00:00:00.000 Started by upstream project "autotest-per-patch" build number 126196 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "jbp-per-patch" build number 23956 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.060 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.061 The recommended git tool is: git 00:00:00.061 using credential 00000000-0000-0000-0000-000000000002 00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.098 Fetching changes from the remote Git repository 00:00:00.100 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.136 Using shallow fetch with depth 1 00:00:00.136 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.136 > git --version # timeout=10 00:00:00.175 > git --version # 'git version 2.39.2' 00:00:00.175 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/40/22240/22 # timeout=5 00:00:05.240 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.253 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.267 Checking out Revision 5fe533b64b2bcae2206a8f61fddcc62257280cde (FETCH_HEAD) 00:00:05.267 > git config core.sparsecheckout # timeout=10 00:00:05.277 > git read-tree -mu HEAD # timeout=10 00:00:05.295 > git checkout -f 5fe533b64b2bcae2206a8f61fddcc62257280cde # timeout=5 00:00:05.318 Commit message: "jenkins/jjb-config: Add support for native DPDK build into docker-autoruner" 00:00:05.318 > git rev-list --no-walk 74850c0aca59a95b8f6e0c0ea246ac78dd77feb5 # timeout=10 00:00:05.407 [Pipeline] Start of Pipeline 00:00:05.420 [Pipeline] library 00:00:05.422 Loading library shm_lib@master 00:00:05.422 Library shm_lib@master is cached. Copying from home. 00:00:05.444 [Pipeline] node 00:00:05.454 Running on CYP13 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.456 [Pipeline] { 00:00:05.468 [Pipeline] catchError 00:00:05.470 [Pipeline] { 00:00:05.487 [Pipeline] wrap 00:00:05.497 [Pipeline] { 00:00:05.502 [Pipeline] stage 00:00:05.503 [Pipeline] { (Prologue) 00:00:05.686 [Pipeline] sh 00:00:05.969 + logger -p user.info -t JENKINS-CI 00:00:05.998 [Pipeline] echo 00:00:05.999 Node: CYP13 00:00:06.006 [Pipeline] sh 00:00:06.308 [Pipeline] setCustomBuildProperty 00:00:06.322 [Pipeline] echo 00:00:06.324 Cleanup processes 00:00:06.331 [Pipeline] sh 00:00:06.626 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.626 351276 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.687 [Pipeline] sh 00:00:06.995 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.995 ++ grep -v 'sudo pgrep' 00:00:06.995 ++ awk '{print $1}' 00:00:06.995 + sudo kill -9 00:00:06.995 + true 00:00:07.008 [Pipeline] cleanWs 00:00:07.016 [WS-CLEANUP] Deleting project workspace... 00:00:07.016 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.023 [WS-CLEANUP] done 00:00:07.027 [Pipeline] setCustomBuildProperty 00:00:07.038 [Pipeline] sh 00:00:07.319 + sudo git config --global --replace-all safe.directory '*' 00:00:07.395 [Pipeline] httpRequest 00:00:07.431 [Pipeline] echo 00:00:07.432 Sorcerer 10.211.164.101 is alive 00:00:07.438 [Pipeline] httpRequest 00:00:07.442 HttpMethod: GET 00:00:07.442 URL: http://10.211.164.101/packages/jbp_5fe533b64b2bcae2206a8f61fddcc62257280cde.tar.gz 00:00:07.443 Sending request to url: http://10.211.164.101/packages/jbp_5fe533b64b2bcae2206a8f61fddcc62257280cde.tar.gz 00:00:07.466 Response Code: HTTP/1.1 200 OK 00:00:07.466 Success: Status code 200 is in the accepted range: 200,404 00:00:07.466 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_5fe533b64b2bcae2206a8f61fddcc62257280cde.tar.gz 00:00:33.997 [Pipeline] sh 00:00:34.279 + tar --no-same-owner -xf jbp_5fe533b64b2bcae2206a8f61fddcc62257280cde.tar.gz 00:00:34.297 [Pipeline] httpRequest 00:00:34.321 [Pipeline] echo 00:00:34.323 Sorcerer 10.211.164.101 is alive 00:00:34.333 [Pipeline] httpRequest 00:00:34.337 HttpMethod: GET 00:00:34.338 URL: http://10.211.164.101/packages/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:00:34.339 Sending request to url: http://10.211.164.101/packages/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:00:34.347 Response Code: HTTP/1.1 200 OK 00:00:34.347 Success: Status code 200 is in the accepted range: 200,404 00:00:34.347 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:01:51.348 [Pipeline] sh 00:01:51.636 + tar --no-same-owner -xf spdk_248c547d03bd63d26c50240ccfd7f3cfc99bc650.tar.gz 00:01:54.220 [Pipeline] sh 00:01:54.506 + git -C spdk log --oneline -n5 00:01:54.506 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:54.506 2d30d9f83 accel: introduce tasks in sequence limit 00:01:54.506 2728651ee accel: adjust task per ch define name 00:01:54.506 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:01:54.506 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:01:54.520 [Pipeline] } 00:01:54.538 [Pipeline] // stage 00:01:54.547 [Pipeline] stage 00:01:54.549 [Pipeline] { (Prepare) 00:01:54.570 [Pipeline] writeFile 00:01:54.588 [Pipeline] sh 00:01:54.873 + logger -p user.info -t JENKINS-CI 00:01:54.887 [Pipeline] sh 00:01:55.172 + logger -p user.info -t JENKINS-CI 00:01:55.185 [Pipeline] sh 00:01:55.472 + cat autorun-spdk.conf 00:01:55.472 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.472 SPDK_TEST_NVMF=1 00:01:55.472 SPDK_TEST_NVME_CLI=1 00:01:55.472 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.472 SPDK_TEST_NVMF_NICS=e810 00:01:55.472 SPDK_TEST_VFIOUSER=1 00:01:55.472 SPDK_RUN_UBSAN=1 00:01:55.472 NET_TYPE=phy 00:01:55.480 RUN_NIGHTLY=0 00:01:55.484 [Pipeline] readFile 00:01:55.511 [Pipeline] withEnv 00:01:55.513 [Pipeline] { 00:01:55.523 [Pipeline] sh 00:01:55.841 + set -ex 00:01:55.841 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:55.841 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:55.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.841 ++ SPDK_TEST_NVMF=1 00:01:55.841 ++ SPDK_TEST_NVME_CLI=1 00:01:55.841 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.841 ++ SPDK_TEST_NVMF_NICS=e810 00:01:55.841 ++ SPDK_TEST_VFIOUSER=1 00:01:55.841 ++ SPDK_RUN_UBSAN=1 00:01:55.841 ++ NET_TYPE=phy 00:01:55.841 ++ RUN_NIGHTLY=0 00:01:55.841 + case $SPDK_TEST_NVMF_NICS in 00:01:55.841 + DRIVERS=ice 00:01:55.841 + [[ tcp == \r\d\m\a ]] 
00:01:55.841 + [[ -n ice ]] 00:01:55.841 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:55.841 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:05.840 rmmod: ERROR: Module irdma is not currently loaded 00:02:05.840 rmmod: ERROR: Module i40iw is not currently loaded 00:02:05.840 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:05.840 + true 00:02:05.840 + for D in $DRIVERS 00:02:05.840 + sudo modprobe ice 00:02:05.840 + exit 0 00:02:05.851 [Pipeline] } 00:02:05.872 [Pipeline] // withEnv 00:02:05.877 [Pipeline] } 00:02:05.892 [Pipeline] // stage 00:02:05.901 [Pipeline] catchError 00:02:05.902 [Pipeline] { 00:02:05.912 [Pipeline] timeout 00:02:05.912 Timeout set to expire in 50 min 00:02:05.913 [Pipeline] { 00:02:05.927 [Pipeline] stage 00:02:05.929 [Pipeline] { (Tests) 00:02:05.946 [Pipeline] sh 00:02:06.229 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.229 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.229 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.229 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:06.229 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.229 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:06.229 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.229 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.229 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:06.229 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.229 + source /etc/os-release 00:02:06.229 ++ NAME='Fedora Linux' 00:02:06.229 ++ VERSION='38 (Cloud Edition)' 00:02:06.229 ++ ID=fedora 00:02:06.229 ++ VERSION_ID=38 00:02:06.229 ++ VERSION_CODENAME= 00:02:06.229 ++ PLATFORM_ID=platform:f38 00:02:06.229 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:06.229 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.229 ++ LOGO=fedora-logo-icon 00:02:06.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:06.229 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:06.229 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.229 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.229 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.229 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:06.229 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.229 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:06.229 ++ SUPPORT_END=2024-05-14 00:02:06.229 ++ VARIANT='Cloud Edition' 00:02:06.229 ++ VARIANT_ID=cloud 00:02:06.229 + uname -a 00:02:06.229 Linux spdk-cyp-13 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:06.229 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:09.530 Hugepages 00:02:09.530 node hugesize free / total 00:02:09.530 node0 1048576kB 0 / 0 00:02:09.530 node0 2048kB 0 / 0 00:02:09.530 node1 1048576kB 0 / 0 00:02:09.530 node1 2048kB 0 / 0 00:02:09.530 00:02:09.530 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.530 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.4 8086 0b00 
0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:09.530 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:09.530 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:09.530 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:09.530 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:09.530 + rm -f /tmp/spdk-ld-path 00:02:09.530 + source autorun-spdk.conf 00:02:09.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.530 ++ SPDK_TEST_NVMF=1 00:02:09.530 ++ SPDK_TEST_NVME_CLI=1 00:02:09.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.530 ++ SPDK_TEST_NVMF_NICS=e810 00:02:09.530 ++ SPDK_TEST_VFIOUSER=1 00:02:09.530 ++ SPDK_RUN_UBSAN=1 00:02:09.530 ++ NET_TYPE=phy 00:02:09.530 ++ RUN_NIGHTLY=0 00:02:09.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.530 + [[ -n '' ]] 00:02:09.530 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:09.530 + for M in /var/spdk/build-*-manifest.txt 00:02:09.530 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:09.530 + for M in /var/spdk/build-*-manifest.txt 00:02:09.530 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:09.530 ++ uname 00:02:09.530 + [[ Linux == \L\i\n\u\x ]] 00:02:09.530 + sudo dmesg -T 00:02:09.530 + sudo dmesg --clear 00:02:09.530 + dmesg_pid=352912 00:02:09.530 + [[ Fedora Linux == FreeBSD ]] 00:02:09.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.530 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.530 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.530 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.531 + sudo dmesg -Tw 00:02:09.531 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.531 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:09.531 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.531 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.531 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.531 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.531 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.531 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.531 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:09.531 Test configuration: 00:02:09.531 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.531 SPDK_TEST_NVMF=1 00:02:09.531 SPDK_TEST_NVME_CLI=1 00:02:09.531 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.531 SPDK_TEST_NVMF_NICS=e810 00:02:09.531 SPDK_TEST_VFIOUSER=1 00:02:09.531 SPDK_RUN_UBSAN=1 00:02:09.531 NET_TYPE=phy 00:02:09.791 RUN_NIGHTLY=0 15:07:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:09.791 15:07:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.791 15:07:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.791 15:07:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.791 15:07:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.791 15:07:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.791 15:07:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.791 15:07:19 -- paths/export.sh@5 -- $ export PATH 00:02:09.791 15:07:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.791 15:07:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:09.791 15:07:19 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:09.791 15:07:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721048839.XXXXXX 00:02:09.791 15:07:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048839.ZORXcN 00:02:09.791 15:07:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:09.791 15:07:19 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:09.791 15:07:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:09.791 15:07:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:09.791 15:07:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.791 15:07:19 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:09.791 15:07:19 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:09.791 15:07:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.791 15:07:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:09.791 15:07:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:09.791 15:07:19 -- pm/common@17 -- $ local monitor 00:02:09.791 15:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.791 15:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.791 15:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.791 15:07:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.791 15:07:19 -- pm/common@21 -- $ date +%s 00:02:09.791 15:07:19 -- pm/common@21 -- $ date +%s 00:02:09.791 15:07:19 -- pm/common@25 -- $ sleep 1 00:02:09.791 15:07:19 -- pm/common@21 -- $ date +%s 00:02:09.791 15:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048839 00:02:09.791 15:07:19 -- pm/common@21 -- $ date +%s 00:02:09.791 15:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048839 00:02:09.792 15:07:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048839 00:02:09.792 15:07:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721048839 00:02:09.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048839_collect-cpu-load.pm.log 00:02:09.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048839_collect-vmstat.pm.log 00:02:09.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048839_collect-cpu-temp.pm.log 00:02:09.792 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721048839_collect-bmc-pm.bmc.pm.log 00:02:10.734 15:07:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:10.734 15:07:20 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.734 15:07:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.734 15:07:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.734 15:07:20 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.734 Mon Jul 15 01:07:20 PM UTC 2024 00:02:10.734 15:07:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.734 v24.09-pre-208-g248c547d0 00:02:10.734 15:07:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.734 15:07:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.734 15:07:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.734 15:07:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:10.734 15:07:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:10.734 15:07:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.734 ************************************ 00:02:10.734 START TEST ubsan 00:02:10.734 ************************************ 00:02:10.734 15:07:20 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:10.734 using ubsan 00:02:10.734 00:02:10.734 real 0m0.001s 00:02:10.734 user 0m0.000s 00:02:10.734 sys 0m0.000s 00:02:10.734 15:07:20 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:10.734 15:07:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.734 ************************************ 00:02:10.734 END TEST ubsan 00:02:10.734 ************************************ 00:02:10.734 15:07:20 -- common/autotest_common.sh@1142 -- $ return 0 00:02:10.734 15:07:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.734 15:07:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.734 15:07:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.734 15:07:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:10.995 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:10.995 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:11.256 Using 'verbs' RDMA provider 00:02:27.105 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:39.365 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:39.365 Creating mk/config.mk...done. 00:02:39.365 Creating mk/cc.flags.mk...done. 00:02:39.365 Type 'make' to build. 
00:02:39.365 15:07:47 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
00:02:39.365 15:07:47 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:02:39.365 15:07:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:39.365 15:07:47 -- common/autotest_common.sh@10 -- $ set +x
00:02:39.365 ************************************
00:02:39.365 START TEST make
00:02:39.365 ************************************
00:02:39.365 15:07:47 make -- common/autotest_common.sh@1123 -- $ make -j144
00:02:39.365 make[1]: Nothing to be done for 'all'.
00:02:39.934 The Meson build system
00:02:39.934 Version: 1.3.1
00:02:39.934 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:39.934 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:39.934 Build type: native build
00:02:39.934 Project name: libvfio-user
00:02:39.934 Project version: 0.0.1
00:02:39.934 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:39.934 C linker for the host machine: cc ld.bfd 2.39-16
00:02:39.934 Host machine cpu family: x86_64
00:02:39.934 Host machine cpu: x86_64
00:02:39.934 Run-time dependency threads found: YES
00:02:39.934 Library dl found: YES
00:02:39.934 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:39.934 Run-time dependency json-c found: YES 0.17
00:02:39.934 Run-time dependency cmocka found: YES 1.1.7
00:02:39.934 Program pytest-3 found: NO
00:02:39.934 Program flake8 found: NO
00:02:39.934 Program misspell-fixer found: NO
00:02:39.934 Program restructuredtext-lint found: NO
00:02:39.934 Program valgrind found: YES (/usr/bin/valgrind)
00:02:39.934 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:39.934 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:39.934 Compiler for C supports arguments -Wwrite-strings: YES
00:02:39.934 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:39.934 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:39.934 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:39.934 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:39.934 Build targets in project: 8
00:02:39.934 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:39.934 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:39.934
00:02:39.934 libvfio-user 0.0.1
00:02:39.934
00:02:39.934 User defined options
00:02:39.934 buildtype : debug
00:02:39.934 default_library: shared
00:02:39.934 libdir : /usr/local/lib
00:02:39.934
00:02:39.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:40.192 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:40.192 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:40.192 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:40.192 [3/37] Compiling C object samples/null.p/null.c.o
00:02:40.192 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:40.192 [5/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:40.192 [6/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:40.192 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:40.192 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:40.192 [9/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:40.450 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:40.450 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:40.451 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:40.451 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:40.451 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:40.451 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:40.451 [16/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:40.451 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:40.451 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:40.451 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:40.451 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:40.451 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:40.451 [22/37] Compiling C object samples/server.p/server.c.o
00:02:40.451 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:40.451 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:40.451 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:40.451 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:40.451 [27/37] Compiling C object samples/client.p/client.c.o
00:02:40.451 [28/37] Linking target lib/libvfio-user.so.0.0.1
00:02:40.451 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:40.451 [30/37] Linking target samples/client
00:02:40.451 [31/37] Linking target test/unit_tests
00:02:40.711 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:40.711 [33/37] Linking target samples/gpio-pci-idio-16
00:02:40.711 [34/37] Linking target samples/server
00:02:40.711 [35/37] Linking target samples/lspci
00:02:40.711 [36/37] Linking target samples/null
00:02:40.711 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:40.711 INFO: autodetecting backend as ninja
00:02:40.711 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:40.711 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:40.975 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:40.975 ninja: no work to do. 00:02:47.594 The Meson build system 00:02:47.594 Version: 1.3.1 00:02:47.594 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:47.594 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:47.594 Build type: native build 00:02:47.594 Program cat found: YES (/usr/bin/cat) 00:02:47.594 Project name: DPDK 00:02:47.594 Project version: 24.03.0 00:02:47.594 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:47.594 C linker for the host machine: cc ld.bfd 2.39-16 00:02:47.594 Host machine cpu family: x86_64 00:02:47.594 Host machine cpu: x86_64 00:02:47.594 Message: ## Building in Developer Mode ## 00:02:47.594 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:47.594 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:47.594 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:47.594 Program python3 found: YES (/usr/bin/python3) 00:02:47.594 Program cat found: YES (/usr/bin/cat) 00:02:47.594 Compiler for C supports arguments -march=native: YES 00:02:47.594 Checking for size of "void *" : 8 00:02:47.594 Checking for size of "void *" : 8 (cached) 00:02:47.594 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:47.594 Library m found: YES 00:02:47.594 Library numa found: YES 00:02:47.594 Has header "numaif.h" : YES 00:02:47.594 Library fdt found: NO 00:02:47.594 Library execinfo found: NO 00:02:47.594 Has header "execinfo.h" : YES 00:02:47.594 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:47.594 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:47.594 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:47.594 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:47.594 Run-time dependency openssl found: YES 3.0.9 00:02:47.594 Run-time dependency libpcap found: YES 1.10.4 00:02:47.594 Has header "pcap.h" with dependency libpcap: YES 00:02:47.594 Compiler for C supports arguments -Wcast-qual: YES 00:02:47.594 Compiler for C supports arguments -Wdeprecated: YES 00:02:47.594 Compiler for C supports arguments -Wformat: YES 00:02:47.594 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:47.594 Compiler for C supports arguments -Wformat-security: NO 00:02:47.594 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.594 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:47.594 Compiler for C supports arguments -Wnested-externs: YES 00:02:47.594 Compiler for C supports arguments -Wold-style-definition: YES 00:02:47.594 Compiler for C supports arguments -Wpointer-arith: YES 00:02:47.594 Compiler for C supports arguments -Wsign-compare: YES 00:02:47.594 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:47.594 Compiler for C supports arguments -Wundef: YES 00:02:47.595 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.595 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:47.595 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:47.595 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.595 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:47.595 Program objdump found: YES (/usr/bin/objdump) 00:02:47.595 Compiler for C supports arguments -mavx512f: YES 00:02:47.595 Checking if "AVX512 checking" compiles: YES 00:02:47.595 Fetching value of define "__SSE4_2__" : 1 00:02:47.595 Fetching value of define "__AES__" : 1 00:02:47.595 Fetching value of define "__AVX__" : 1 00:02:47.595 Fetching value of define "__AVX2__" : 1 00:02:47.595 Fetching value of define "__AVX512BW__" : 1 00:02:47.595 Fetching value of define "__AVX512CD__" : 1 00:02:47.595 Fetching value of define "__AVX512DQ__" : 1 00:02:47.595 Fetching value of define "__AVX512F__" : 1 00:02:47.595 Fetching value of define "__AVX512VL__" : 1 00:02:47.595 Fetching value of define "__PCLMUL__" : 1 00:02:47.595 Fetching value of define "__RDRND__" : 1 00:02:47.595 Fetching value of define "__RDSEED__" : 1 00:02:47.595 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:47.595 Fetching value of define "__znver1__" : (undefined) 00:02:47.595 Fetching value of define "__znver2__" : (undefined) 00:02:47.595 Fetching value of define "__znver3__" : (undefined) 00:02:47.595 Fetching value of define "__znver4__" : (undefined) 00:02:47.595 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:47.595 Message: lib/log: Defining dependency "log" 00:02:47.595 Message: lib/kvargs: Defining dependency "kvargs" 00:02:47.595 Message: lib/telemetry: Defining dependency "telemetry" 00:02:47.595 Checking for function "getentropy" : NO 00:02:47.595 Message: lib/eal: Defining dependency "eal" 00:02:47.595 Message: lib/ring: Defining dependency "ring" 00:02:47.595 Message: lib/rcu: Defining dependency "rcu" 00:02:47.595 Message: lib/mempool: Defining dependency "mempool" 00:02:47.595 Message: lib/mbuf: Defining dependency "mbuf" 00:02:47.595 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:47.595 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:47.595 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:47.595 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:47.595 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:47.595 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:47.595 Compiler for C supports arguments -mpclmul: YES 00:02:47.595 Compiler for C supports arguments -maes: YES 00:02:47.595 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:47.595 Compiler for C supports arguments -mavx512bw: YES 00:02:47.595 Compiler for C supports arguments -mavx512dq: YES 00:02:47.595 Compiler for C supports arguments -mavx512vl: YES 00:02:47.595 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:47.595 Compiler for C supports arguments -mavx2: YES 00:02:47.595 Compiler for C supports arguments -mavx: YES 00:02:47.595 Message: lib/net: Defining dependency "net" 00:02:47.595 Message: lib/meter: Defining dependency "meter" 00:02:47.595 Message: lib/ethdev: Defining dependency "ethdev" 00:02:47.595 Message: lib/pci: Defining dependency "pci" 00:02:47.595 Message: lib/cmdline: Defining dependency "cmdline" 00:02:47.595 Message: lib/hash: Defining dependency "hash" 00:02:47.595 Message: lib/timer: Defining dependency "timer" 00:02:47.595 Message: lib/compressdev: Defining dependency "compressdev" 00:02:47.595 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:47.595 Message: lib/dmadev: Defining dependency "dmadev" 00:02:47.595 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:47.595 Message: lib/power: Defining dependency "power" 00:02:47.595 Message: lib/reorder: Defining dependency "reorder" 00:02:47.595 Message: lib/security: Defining dependency "security" 00:02:47.595 Has header "linux/userfaultfd.h" : YES 00:02:47.595 Has header "linux/vduse.h" : YES 00:02:47.595 Message: lib/vhost: Defining dependency "vhost" 00:02:47.595 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:47.595 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:47.595 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:47.595 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:47.595 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:47.595 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:47.595 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:47.595 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:47.595 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:47.595 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:47.595 Program doxygen found: YES (/usr/bin/doxygen) 00:02:47.595 Configuring doxy-api-html.conf using configuration 00:02:47.595 Configuring doxy-api-man.conf using configuration 00:02:47.595 Program mandb found: YES (/usr/bin/mandb) 00:02:47.595 Program sphinx-build found: NO 00:02:47.595 Configuring rte_build_config.h using configuration 00:02:47.595 Message: 00:02:47.595 ================= 00:02:47.595 Applications Enabled 00:02:47.595 ================= 00:02:47.595 00:02:47.595 apps: 00:02:47.595 00:02:47.595 00:02:47.595 Message: 00:02:47.595 ================= 00:02:47.595 Libraries Enabled 00:02:47.595 ================= 00:02:47.595 00:02:47.595 libs: 00:02:47.595 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:47.595 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:47.595 cryptodev, dmadev, power, reorder, security, vhost, 00:02:47.595 00:02:47.595 Message: 00:02:47.595 =============== 00:02:47.595 Drivers Enabled 00:02:47.595 =============== 00:02:47.595 00:02:47.595 common: 00:02:47.595 00:02:47.595 bus: 00:02:47.595 pci, vdev, 00:02:47.595 mempool: 00:02:47.595 ring, 00:02:47.595 dma: 00:02:47.595 00:02:47.595 net: 00:02:47.595 00:02:47.595 crypto: 00:02:47.595 00:02:47.595 compress: 00:02:47.595 00:02:47.595 vdpa: 00:02:47.595 00:02:47.595 00:02:47.595 Message: 00:02:47.595 ================= 00:02:47.595 Content Skipped 00:02:47.595 ================= 00:02:47.595 00:02:47.595 apps: 00:02:47.595 dumpcap: explicitly disabled via build config 00:02:47.595 graph: explicitly disabled via build config 00:02:47.595 pdump: explicitly disabled via build config 00:02:47.595 proc-info: explicitly disabled via build config 00:02:47.595 test-acl: explicitly disabled via build config 00:02:47.595 test-bbdev: explicitly disabled via build config 00:02:47.595 test-cmdline: explicitly disabled via build config 00:02:47.595 test-compress-perf: explicitly disabled via build config 00:02:47.595 test-crypto-perf: explicitly disabled via build config 00:02:47.595 test-dma-perf: explicitly disabled via build config 00:02:47.595 test-eventdev: explicitly disabled via build config 00:02:47.595 test-fib: explicitly disabled via build config 00:02:47.595 test-flow-perf: explicitly disabled via build config 00:02:47.595 test-gpudev: explicitly disabled via build config 00:02:47.595 
test-mldev: explicitly disabled via build config 00:02:47.595 test-pipeline: explicitly disabled via build config 00:02:47.595 test-pmd: explicitly disabled via build config 00:02:47.595 test-regex: explicitly disabled via build config 00:02:47.595 test-sad: explicitly disabled via build config 00:02:47.595 test-security-perf: explicitly disabled via build config 00:02:47.595 00:02:47.595 libs: 00:02:47.595 argparse: explicitly disabled via build config 00:02:47.595 metrics: explicitly disabled via build config 00:02:47.595 acl: explicitly disabled via build config 00:02:47.595 bbdev: explicitly disabled via build config 00:02:47.595 bitratestats: explicitly disabled via build config 00:02:47.595 bpf: explicitly disabled via build config 00:02:47.595 cfgfile: explicitly disabled via build config 00:02:47.595 distributor: explicitly disabled via build config 00:02:47.595 efd: explicitly disabled via build config 00:02:47.595 eventdev: explicitly disabled via build config 00:02:47.595 dispatcher: explicitly disabled via build config 00:02:47.595 gpudev: explicitly disabled via build config 00:02:47.595 gro: explicitly disabled via build config 00:02:47.595 gso: explicitly disabled via build config 00:02:47.595 ip_frag: explicitly disabled via build config 00:02:47.595 jobstats: explicitly disabled via build config 00:02:47.595 latencystats: explicitly disabled via build config 00:02:47.595 lpm: explicitly disabled via build config 00:02:47.595 member: explicitly disabled via build config 00:02:47.595 pcapng: explicitly disabled via build config 00:02:47.595 rawdev: explicitly disabled via build config 00:02:47.595 regexdev: explicitly disabled via build config 00:02:47.595 mldev: explicitly disabled via build config 00:02:47.595 rib: explicitly disabled via build config 00:02:47.595 sched: explicitly disabled via build config 00:02:47.595 stack: explicitly disabled via build config 00:02:47.595 ipsec: explicitly disabled via build config 00:02:47.595 pdcp: explicitly disabled via build config 00:02:47.595 fib: explicitly disabled via build config 00:02:47.595 port: explicitly disabled via build config 00:02:47.595 pdump: explicitly disabled via build config 00:02:47.595 table: explicitly disabled via build config 00:02:47.595 pipeline: explicitly disabled via build config 00:02:47.595 graph: explicitly disabled via build config 00:02:47.595 node: explicitly disabled via build config 00:02:47.595 00:02:47.595 drivers: 00:02:47.595 common/cpt: not in enabled drivers build config 00:02:47.595 common/dpaax: not in enabled drivers build config 00:02:47.595 common/iavf: not in enabled drivers build config 00:02:47.595 common/idpf: not in enabled drivers build config 00:02:47.595 common/ionic: not in enabled drivers build config 00:02:47.595 common/mvep: not in enabled drivers build config 00:02:47.595 common/octeontx: not in enabled drivers build config 00:02:47.595 bus/auxiliary: not in enabled drivers build config 00:02:47.595 bus/cdx: not in enabled drivers build config 00:02:47.595 bus/dpaa: not in enabled drivers build config 00:02:47.595 bus/fslmc: not in enabled drivers build config 00:02:47.595 bus/ifpga: not in enabled drivers build config 00:02:47.595 bus/platform: not in enabled drivers build config 00:02:47.595 bus/uacce: not in enabled drivers build config 00:02:47.595 bus/vmbus: not in enabled drivers build config 00:02:47.595 common/cnxk: not in enabled drivers build config 00:02:47.595 common/mlx5: not in enabled drivers build config 00:02:47.595 common/nfp: not in enabled drivers 
build config 00:02:47.595 common/nitrox: not in enabled drivers build config 00:02:47.595 common/qat: not in enabled drivers build config 00:02:47.595 common/sfc_efx: not in enabled drivers build config 00:02:47.595 mempool/bucket: not in enabled drivers build config 00:02:47.595 mempool/cnxk: not in enabled drivers build config 00:02:47.595 mempool/dpaa: not in enabled drivers build config 00:02:47.596 mempool/dpaa2: not in enabled drivers build config 00:02:47.596 mempool/octeontx: not in enabled drivers build config 00:02:47.596 mempool/stack: not in enabled drivers build config 00:02:47.596 dma/cnxk: not in enabled drivers build config 00:02:47.596 dma/dpaa: not in enabled drivers build config 00:02:47.596 dma/dpaa2: not in enabled drivers build config 00:02:47.596 dma/hisilicon: not in enabled drivers build config 00:02:47.596 dma/idxd: not in enabled drivers build config 00:02:47.596 dma/ioat: not in enabled drivers build config 00:02:47.596 dma/skeleton: not in enabled drivers build config 00:02:47.596 net/af_packet: not in enabled drivers build config 00:02:47.596 net/af_xdp: not in enabled drivers build config 00:02:47.596 net/ark: not in enabled drivers build config 00:02:47.596 net/atlantic: not in enabled drivers build config 00:02:47.596 net/avp: not in enabled drivers build config 00:02:47.596 net/axgbe: not in enabled drivers build config 00:02:47.596 net/bnx2x: not in enabled drivers build config 00:02:47.596 net/bnxt: not in enabled drivers build config 00:02:47.596 net/bonding: not in enabled drivers build config 00:02:47.596 net/cnxk: not in enabled drivers build config 00:02:47.596 net/cpfl: not in enabled drivers build config 00:02:47.596 net/cxgbe: not in enabled drivers build config 00:02:47.596 net/dpaa: not in enabled drivers build config 00:02:47.596 net/dpaa2: not in enabled drivers build config 00:02:47.596 net/e1000: not in enabled drivers build config 00:02:47.596 net/ena: not in enabled drivers build config 00:02:47.596 net/enetc: not in enabled drivers build config 00:02:47.596 net/enetfec: not in enabled drivers build config 00:02:47.596 net/enic: not in enabled drivers build config 00:02:47.596 net/failsafe: not in enabled drivers build config 00:02:47.596 net/fm10k: not in enabled drivers build config 00:02:47.596 net/gve: not in enabled drivers build config 00:02:47.596 net/hinic: not in enabled drivers build config 00:02:47.596 net/hns3: not in enabled drivers build config 00:02:47.596 net/i40e: not in enabled drivers build config 00:02:47.596 net/iavf: not in enabled drivers build config 00:02:47.596 net/ice: not in enabled drivers build config 00:02:47.596 net/idpf: not in enabled drivers build config 00:02:47.596 net/igc: not in enabled drivers build config 00:02:47.596 net/ionic: not in enabled drivers build config 00:02:47.596 net/ipn3ke: not in enabled drivers build config 00:02:47.596 net/ixgbe: not in enabled drivers build config 00:02:47.596 net/mana: not in enabled drivers build config 00:02:47.596 net/memif: not in enabled drivers build config 00:02:47.596 net/mlx4: not in enabled drivers build config 00:02:47.596 net/mlx5: not in enabled drivers build config 00:02:47.596 net/mvneta: not in enabled drivers build config 00:02:47.596 net/mvpp2: not in enabled drivers build config 00:02:47.596 net/netvsc: not in enabled drivers build config 00:02:47.596 net/nfb: not in enabled drivers build config 00:02:47.596 net/nfp: not in enabled drivers build config 00:02:47.596 net/ngbe: not in enabled drivers build config 00:02:47.596 net/null: not in 
enabled drivers build config 00:02:47.596 net/octeontx: not in enabled drivers build config 00:02:47.596 net/octeon_ep: not in enabled drivers build config 00:02:47.596 net/pcap: not in enabled drivers build config 00:02:47.596 net/pfe: not in enabled drivers build config 00:02:47.596 net/qede: not in enabled drivers build config 00:02:47.596 net/ring: not in enabled drivers build config 00:02:47.596 net/sfc: not in enabled drivers build config 00:02:47.596 net/softnic: not in enabled drivers build config 00:02:47.596 net/tap: not in enabled drivers build config 00:02:47.596 net/thunderx: not in enabled drivers build config 00:02:47.596 net/txgbe: not in enabled drivers build config 00:02:47.596 net/vdev_netvsc: not in enabled drivers build config 00:02:47.596 net/vhost: not in enabled drivers build config 00:02:47.596 net/virtio: not in enabled drivers build config 00:02:47.596 net/vmxnet3: not in enabled drivers build config 00:02:47.596 raw/*: missing internal dependency, "rawdev" 00:02:47.596 crypto/armv8: not in enabled drivers build config 00:02:47.596 crypto/bcmfs: not in enabled drivers build config 00:02:47.596 crypto/caam_jr: not in enabled drivers build config 00:02:47.596 crypto/ccp: not in enabled drivers build config 00:02:47.596 crypto/cnxk: not in enabled drivers build config 00:02:47.596 crypto/dpaa_sec: not in enabled drivers build config 00:02:47.596 crypto/dpaa2_sec: not in enabled drivers build config 00:02:47.596 crypto/ipsec_mb: not in enabled drivers build config 00:02:47.596 crypto/mlx5: not in enabled drivers build config 00:02:47.596 crypto/mvsam: not in enabled drivers build config 00:02:47.596 crypto/nitrox: not in enabled drivers build config 00:02:47.596 crypto/null: not in enabled drivers build config 00:02:47.596 crypto/octeontx: not in enabled drivers build config 00:02:47.596 crypto/openssl: not in enabled drivers build config 00:02:47.596 crypto/scheduler: not in enabled drivers build config 00:02:47.596 crypto/uadk: not in enabled drivers build config 00:02:47.596 crypto/virtio: not in enabled drivers build config 00:02:47.596 compress/isal: not in enabled drivers build config 00:02:47.596 compress/mlx5: not in enabled drivers build config 00:02:47.596 compress/nitrox: not in enabled drivers build config 00:02:47.596 compress/octeontx: not in enabled drivers build config 00:02:47.596 compress/zlib: not in enabled drivers build config 00:02:47.596 regex/*: missing internal dependency, "regexdev" 00:02:47.596 ml/*: missing internal dependency, "mldev" 00:02:47.596 vdpa/ifc: not in enabled drivers build config 00:02:47.596 vdpa/mlx5: not in enabled drivers build config 00:02:47.596 vdpa/nfp: not in enabled drivers build config 00:02:47.596 vdpa/sfc: not in enabled drivers build config 00:02:47.596 event/*: missing internal dependency, "eventdev" 00:02:47.596 baseband/*: missing internal dependency, "bbdev" 00:02:47.596 gpu/*: missing internal dependency, "gpudev" 00:02:47.596 00:02:47.596 00:02:47.596 Build targets in project: 84 00:02:47.596 00:02:47.596 DPDK 24.03.0 00:02:47.596 00:02:47.596 User defined options 00:02:47.596 buildtype : debug 00:02:47.596 default_library : shared 00:02:47.596 libdir : lib 00:02:47.596 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:47.596 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:47.596 c_link_args : 00:02:47.596 cpu_instruction_set: native 00:02:47.596 disable_apps : 
proc-info,test-fib,graph,test-dma-perf,test-mldev,test,test-regex,dumpcap,test-cmdline,test-acl,test-pipeline,test-flow-perf,pdump,test-sad,test-gpudev,test-security-perf,test-crypto-perf,test-bbdev,test-pmd,test-compress-perf,test-eventdev 00:02:47.596 disable_libs : bbdev,fib,dispatcher,distributor,bpf,latencystats,graph,mldev,efd,eventdev,gso,gpudev,acl,pipeline,stack,jobstats,ipsec,argparse,rib,pdcp,table,pdump,cfgfile,gro,pcapng,bitratestats,ip_frag,member,sched,node,port,metrics,lpm,regexdev,rawdev 00:02:47.596 enable_docs : false 00:02:47.596 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:47.596 enable_kmods : false 00:02:47.596 max_lcores : 128 00:02:47.596 tests : false 00:02:47.596 00:02:47.596 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:47.596 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:47.596 [1/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:47.596 [2/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:47.596 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:47.596 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:47.596 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.596 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.596 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.596 [8/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.596 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:47.596 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:47.596 [11/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:47.596 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:47.596 [13/267] Linking static target lib/librte_pci.a 00:02:47.596 [14/267] Linking static target lib/librte_log.a 00:02:47.596 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:47.596 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:47.596 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.596 [18/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:47.596 [19/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.596 [20/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:47.596 [21/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:47.596 [22/267] Linking static target lib/librte_kvargs.a 00:02:47.596 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.596 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.596 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.596 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:47.596 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:47.857 [28/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.857 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.857 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:47.857 [31/267] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:47.857 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:47.857 [33/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:47.857 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:47.857 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:47.857 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:47.857 [37/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.857 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:47.857 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:47.857 [40/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.857 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:47.857 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:47.857 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.857 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.857 [45/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.857 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.857 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:47.857 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.857 [49/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.857 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:47.857 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:47.857 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:47.857 [53/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:47.857 [54/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.857 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.857 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.857 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.857 [58/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.857 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.857 [60/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.857 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.857 [62/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.857 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.857 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:47.857 [65/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:47.857 [66/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.857 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.857 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.857 [69/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:47.857 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.857 
[71/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:47.857 [72/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.857 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:47.857 [74/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:47.857 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.857 [76/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:47.857 [77/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:48.115 [78/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.115 [79/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.115 [80/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.115 [81/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.115 [82/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:48.115 [83/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.115 [84/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:48.115 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.115 [86/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.115 [87/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.115 [88/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.115 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.115 [90/267] Linking static target lib/librte_dmadev.a 00:02:48.115 [91/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:48.115 [92/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.115 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:48.115 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.115 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:48.115 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.115 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.115 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:48.115 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.115 [100/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.115 [101/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.115 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.115 [103/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.115 [104/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:48.115 [105/267] Linking static target lib/librte_net.a 00:02:48.115 [106/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:48.115 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.115 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.116 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.116 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.116 [111/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:48.116 [112/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.116 [113/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.116 [114/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.116 [115/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:48.116 [116/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:48.116 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:48.116 [118/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:48.116 [119/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.116 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.116 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:48.116 [122/267] Linking static target lib/librte_telemetry.a 00:02:48.116 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.116 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.116 [125/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.116 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:48.116 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:48.116 [128/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:48.116 [129/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.116 [130/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:48.116 [131/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.116 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:48.116 [133/267] Linking static target lib/librte_meter.a 00:02:48.375 [134/267] Linking static target lib/librte_rcu.a 00:02:48.375 [135/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:48.375 [136/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:48.375 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:48.375 [138/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.375 [139/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.375 [140/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.375 [141/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:48.375 [142/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.375 [143/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.375 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:48.375 [145/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.375 [146/267] Linking static target lib/librte_reorder.a 00:02:48.375 [147/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.375 [148/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.375 [149/267] Linking static target lib/librte_ring.a 00:02:48.375 [150/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:48.375 [151/267] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.375 [152/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.375 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:48.375 [154/267] Linking static target lib/librte_timer.a 00:02:48.375 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.375 [156/267] Linking target lib/librte_log.so.24.1 00:02:48.375 [157/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.375 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:48.375 [159/267] Linking static target lib/librte_cmdline.a 00:02:48.375 [160/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.375 [161/267] Linking static target lib/librte_compressdev.a 00:02:48.375 [162/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.375 [163/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:48.375 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.375 [165/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.375 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.375 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:48.375 [168/267] Linking static target lib/librte_mbuf.a 00:02:48.375 [169/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:48.375 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.375 [171/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:48.375 [172/267] Linking static target lib/librte_mempool.a 00:02:48.375 [173/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:48.375 [174/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:48.375 [175/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.375 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:48.375 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.375 [178/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:48.375 [179/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:48.375 [180/267] Linking static target lib/librte_power.a 00:02:48.375 [181/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.375 [182/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.375 [183/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:48.375 [184/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:48.375 [185/267] Linking static target lib/librte_eal.a 00:02:48.375 [186/267] Linking static target drivers/librte_bus_vdev.a 00:02:48.375 [187/267] Linking static target lib/librte_hash.a 00:02:48.375 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:48.375 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.375 [190/267] Linking target lib/librte_kvargs.so.24.1 00:02:48.375 [191/267] Linking static target lib/librte_security.a 00:02:48.375 [192/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.375 [193/267] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.375 [194/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:48.375 [195/267] Linking static target drivers/librte_mempool_ring.a 00:02:48.375 [196/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:48.636 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.636 [198/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.636 [199/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:48.636 [200/267] Linking static target lib/librte_cryptodev.a 00:02:48.636 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:48.636 [202/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.636 [203/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:48.636 [204/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.636 [205/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.636 [206/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:48.636 [207/267] Linking static target drivers/librte_bus_pci.a 00:02:48.636 [208/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.636 [209/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.636 [210/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.896 [211/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.896 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.896 [213/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.896 [214/267] Linking target lib/librte_telemetry.so.24.1 00:02:48.896 [215/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:48.896 [216/267] Linking static target lib/librte_ethdev.a 00:02:48.896 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.896 [218/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.156 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.156 [220/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.156 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.415 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.675 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.936 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:49.936 [228/267] Linking static target lib/librte_vhost.a 
00:02:50.877 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.262 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.845 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.786 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.786 [233/267] Linking target lib/librte_eal.so.24.1 00:03:00.048 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.048 [235/267] Linking target lib/librte_meter.so.24.1 00:03:00.048 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.048 [237/267] Linking target lib/librte_ring.so.24.1 00:03:00.048 [238/267] Linking target lib/librte_pci.so.24.1 00:03:00.048 [239/267] Linking target lib/librte_timer.so.24.1 00:03:00.048 [240/267] Linking target lib/librte_dmadev.so.24.1 00:03:00.048 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.048 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.048 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.048 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.048 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.048 [246/267] Linking target lib/librte_mempool.so.24.1 00:03:00.048 [247/267] Linking target lib/librte_rcu.so.24.1 00:03:00.309 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.310 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.310 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.310 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.310 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:00.571 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.571 [254/267] Linking target lib/librte_net.so.24.1 00:03:00.571 [255/267] Linking target lib/librte_compressdev.so.24.1 00:03:00.571 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:00.571 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:00.571 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:00.831 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:00.831 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:00.831 [261/267] Linking target lib/librte_hash.so.24.1 00:03:00.831 [262/267] Linking target lib/librte_security.so.24.1 00:03:00.832 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:00.832 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:00.832 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:01.092 [266/267] Linking target lib/librte_power.so.24.1 00:03:01.092 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:01.092 INFO: autodetecting backend as ninja 00:03:01.092 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:02.035 CC lib/ut_mock/mock.o 00:03:02.035 CC lib/log/log.o 00:03:02.035 CC lib/log/log_flags.o 00:03:02.035 CC lib/log/log_deprecated.o 00:03:02.035 CC 
lib/ut/ut.o 00:03:02.297 LIB libspdk_ut_mock.a 00:03:02.297 LIB libspdk_log.a 00:03:02.297 LIB libspdk_ut.a 00:03:02.297 SO libspdk_ut_mock.so.6.0 00:03:02.297 SO libspdk_log.so.7.0 00:03:02.297 SO libspdk_ut.so.2.0 00:03:02.297 SYMLINK libspdk_ut_mock.so 00:03:02.297 SYMLINK libspdk_log.so 00:03:02.297 SYMLINK libspdk_ut.so 00:03:02.868 CC lib/ioat/ioat.o 00:03:02.869 CC lib/dma/dma.o 00:03:02.869 CC lib/util/base64.o 00:03:02.869 CXX lib/trace_parser/trace.o 00:03:02.869 CC lib/util/bit_array.o 00:03:02.869 CC lib/util/cpuset.o 00:03:02.869 CC lib/util/crc16.o 00:03:02.869 CC lib/util/crc32.o 00:03:02.869 CC lib/util/crc32c.o 00:03:02.869 CC lib/util/crc32_ieee.o 00:03:02.869 CC lib/util/dif.o 00:03:02.869 CC lib/util/crc64.o 00:03:02.869 CC lib/util/fd.o 00:03:02.869 CC lib/util/file.o 00:03:02.869 CC lib/util/hexlify.o 00:03:02.869 CC lib/util/iov.o 00:03:02.869 CC lib/util/math.o 00:03:02.869 CC lib/util/pipe.o 00:03:02.869 CC lib/util/strerror_tls.o 00:03:02.869 CC lib/util/string.o 00:03:02.869 CC lib/util/uuid.o 00:03:02.869 CC lib/util/fd_group.o 00:03:02.869 CC lib/util/xor.o 00:03:02.869 CC lib/util/zipf.o 00:03:02.869 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.869 CC lib/vfio_user/host/vfio_user.o 00:03:02.869 LIB libspdk_dma.a 00:03:02.869 SO libspdk_dma.so.4.0 00:03:03.130 LIB libspdk_ioat.a 00:03:03.130 SO libspdk_ioat.so.7.0 00:03:03.130 SYMLINK libspdk_dma.so 00:03:03.130 SYMLINK libspdk_ioat.so 00:03:03.130 LIB libspdk_vfio_user.a 00:03:03.130 SO libspdk_vfio_user.so.5.0 00:03:03.130 LIB libspdk_util.a 00:03:03.392 SYMLINK libspdk_vfio_user.so 00:03:03.392 SO libspdk_util.so.9.1 00:03:03.392 SYMLINK libspdk_util.so 00:03:03.653 LIB libspdk_trace_parser.a 00:03:03.653 SO libspdk_trace_parser.so.5.0 00:03:03.653 SYMLINK libspdk_trace_parser.so 00:03:03.913 CC lib/vmd/vmd.o 00:03:03.913 CC lib/vmd/led.o 00:03:03.913 CC lib/json/json_parse.o 00:03:03.913 CC lib/json/json_util.o 00:03:03.913 CC lib/json/json_write.o 00:03:03.913 CC lib/env_dpdk/env.o 00:03:03.913 CC lib/env_dpdk/memory.o 00:03:03.913 CC lib/idxd/idxd.o 00:03:03.913 CC lib/env_dpdk/pci.o 00:03:03.913 CC lib/idxd/idxd_user.o 00:03:03.913 CC lib/env_dpdk/init.o 00:03:03.913 CC lib/rdma_provider/common.o 00:03:03.913 CC lib/conf/conf.o 00:03:03.913 CC lib/env_dpdk/threads.o 00:03:03.913 CC lib/idxd/idxd_kernel.o 00:03:03.913 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:03.913 CC lib/env_dpdk/pci_ioat.o 00:03:03.913 CC lib/rdma_utils/rdma_utils.o 00:03:03.913 CC lib/env_dpdk/pci_virtio.o 00:03:03.913 CC lib/env_dpdk/pci_vmd.o 00:03:03.913 CC lib/env_dpdk/pci_idxd.o 00:03:03.913 CC lib/env_dpdk/pci_event.o 00:03:03.913 CC lib/env_dpdk/sigbus_handler.o 00:03:03.913 CC lib/env_dpdk/pci_dpdk.o 00:03:03.913 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.913 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.913 LIB libspdk_rdma_provider.a 00:03:03.913 SO libspdk_rdma_provider.so.6.0 00:03:04.173 LIB libspdk_conf.a 00:03:04.173 LIB libspdk_rdma_utils.a 00:03:04.173 SO libspdk_conf.so.6.0 00:03:04.173 LIB libspdk_json.a 00:03:04.173 SYMLINK libspdk_rdma_provider.so 00:03:04.173 SO libspdk_rdma_utils.so.1.0 00:03:04.173 SO libspdk_json.so.6.0 00:03:04.173 SYMLINK libspdk_conf.so 00:03:04.173 SYMLINK libspdk_rdma_utils.so 00:03:04.173 SYMLINK libspdk_json.so 00:03:04.435 LIB libspdk_idxd.a 00:03:04.435 LIB libspdk_vmd.a 00:03:04.435 SO libspdk_idxd.so.12.0 00:03:04.435 SO libspdk_vmd.so.6.0 00:03:04.435 SYMLINK libspdk_idxd.so 00:03:04.435 SYMLINK libspdk_vmd.so 00:03:04.696 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:04.696 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:04.696 CC lib/jsonrpc/jsonrpc_client.o 00:03:04.696 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:04.957 LIB libspdk_jsonrpc.a 00:03:04.957 SO libspdk_jsonrpc.so.6.0 00:03:04.957 SYMLINK libspdk_jsonrpc.so 00:03:04.957 LIB libspdk_env_dpdk.a 00:03:04.957 SO libspdk_env_dpdk.so.14.1 00:03:05.218 SYMLINK libspdk_env_dpdk.so 00:03:05.218 CC lib/rpc/rpc.o 00:03:05.479 LIB libspdk_rpc.a 00:03:05.479 SO libspdk_rpc.so.6.0 00:03:05.800 SYMLINK libspdk_rpc.so 00:03:06.062 CC lib/notify/notify.o 00:03:06.062 CC lib/notify/notify_rpc.o 00:03:06.062 CC lib/trace/trace.o 00:03:06.062 CC lib/trace/trace_flags.o 00:03:06.062 CC lib/trace/trace_rpc.o 00:03:06.062 CC lib/keyring/keyring.o 00:03:06.062 CC lib/keyring/keyring_rpc.o 00:03:06.062 LIB libspdk_notify.a 00:03:06.062 SO libspdk_notify.so.6.0 00:03:06.062 LIB libspdk_keyring.a 00:03:06.323 LIB libspdk_trace.a 00:03:06.323 SYMLINK libspdk_notify.so 00:03:06.323 SO libspdk_keyring.so.1.0 00:03:06.323 SO libspdk_trace.so.10.0 00:03:06.323 SYMLINK libspdk_keyring.so 00:03:06.323 SYMLINK libspdk_trace.so 00:03:06.584 CC lib/thread/thread.o 00:03:06.584 CC lib/sock/sock.o 00:03:06.584 CC lib/thread/iobuf.o 00:03:06.584 CC lib/sock/sock_rpc.o 00:03:07.156 LIB libspdk_sock.a 00:03:07.156 SO libspdk_sock.so.10.0 00:03:07.156 SYMLINK libspdk_sock.so 00:03:07.417 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:07.417 CC lib/nvme/nvme_ctrlr.o 00:03:07.417 CC lib/nvme/nvme_ns_cmd.o 00:03:07.417 CC lib/nvme/nvme_fabric.o 00:03:07.417 CC lib/nvme/nvme_pcie_common.o 00:03:07.417 CC lib/nvme/nvme_ns.o 00:03:07.417 CC lib/nvme/nvme_pcie.o 00:03:07.417 CC lib/nvme/nvme_qpair.o 00:03:07.417 CC lib/nvme/nvme.o 00:03:07.417 CC lib/nvme/nvme_quirks.o 00:03:07.417 CC lib/nvme/nvme_transport.o 00:03:07.417 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:07.417 CC lib/nvme/nvme_discovery.o 00:03:07.417 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:07.417 CC lib/nvme/nvme_tcp.o 00:03:07.417 CC lib/nvme/nvme_opal.o 00:03:07.417 CC lib/nvme/nvme_io_msg.o 00:03:07.417 CC lib/nvme/nvme_poll_group.o 00:03:07.417 CC lib/nvme/nvme_zns.o 00:03:07.417 CC lib/nvme/nvme_stubs.o 00:03:07.417 CC lib/nvme/nvme_auth.o 00:03:07.417 CC lib/nvme/nvme_cuse.o 00:03:07.417 CC lib/nvme/nvme_vfio_user.o 00:03:07.418 CC lib/nvme/nvme_rdma.o 00:03:07.987 LIB libspdk_thread.a 00:03:07.987 SO libspdk_thread.so.10.1 00:03:07.987 SYMLINK libspdk_thread.so 00:03:08.247 CC lib/accel/accel.o 00:03:08.247 CC lib/accel/accel_rpc.o 00:03:08.247 CC lib/accel/accel_sw.o 00:03:08.247 CC lib/blob/blobstore.o 00:03:08.247 CC lib/blob/blob_bs_dev.o 00:03:08.247 CC lib/blob/request.o 00:03:08.247 CC lib/blob/zeroes.o 00:03:08.247 CC lib/virtio/virtio.o 00:03:08.247 CC lib/virtio/virtio_vhost_user.o 00:03:08.247 CC lib/virtio/virtio_vfio_user.o 00:03:08.247 CC lib/virtio/virtio_pci.o 00:03:08.247 CC lib/init/json_config.o 00:03:08.247 CC lib/init/subsystem.o 00:03:08.247 CC lib/init/subsystem_rpc.o 00:03:08.247 CC lib/init/rpc.o 00:03:08.508 CC lib/vfu_tgt/tgt_rpc.o 00:03:08.508 CC lib/vfu_tgt/tgt_endpoint.o 00:03:08.508 LIB libspdk_init.a 00:03:08.770 SO libspdk_init.so.5.0 00:03:08.770 LIB libspdk_virtio.a 00:03:08.770 LIB libspdk_vfu_tgt.a 00:03:08.770 SO libspdk_virtio.so.7.0 00:03:08.770 SYMLINK libspdk_init.so 00:03:08.770 SO libspdk_vfu_tgt.so.3.0 00:03:08.770 SYMLINK libspdk_virtio.so 00:03:08.770 SYMLINK libspdk_vfu_tgt.so 00:03:09.031 CC lib/event/app.o 00:03:09.031 CC lib/event/reactor.o 00:03:09.031 CC lib/event/log_rpc.o 00:03:09.031 CC lib/event/app_rpc.o 00:03:09.031 CC lib/event/scheduler_static.o 
00:03:09.292 LIB libspdk_accel.a 00:03:09.292 SO libspdk_accel.so.15.1 00:03:09.292 LIB libspdk_nvme.a 00:03:09.292 SYMLINK libspdk_accel.so 00:03:09.292 SO libspdk_nvme.so.13.1 00:03:09.552 LIB libspdk_event.a 00:03:09.552 SO libspdk_event.so.14.0 00:03:09.552 SYMLINK libspdk_event.so 00:03:09.552 CC lib/bdev/bdev.o 00:03:09.552 CC lib/bdev/bdev_rpc.o 00:03:09.552 CC lib/bdev/bdev_zone.o 00:03:09.552 CC lib/bdev/part.o 00:03:09.552 CC lib/bdev/scsi_nvme.o 00:03:09.814 SYMLINK libspdk_nvme.so 00:03:10.758 LIB libspdk_blob.a 00:03:10.758 SO libspdk_blob.so.11.0 00:03:11.019 SYMLINK libspdk_blob.so 00:03:11.280 CC lib/lvol/lvol.o 00:03:11.280 CC lib/blobfs/blobfs.o 00:03:11.280 CC lib/blobfs/tree.o 00:03:11.853 LIB libspdk_bdev.a 00:03:11.853 SO libspdk_bdev.so.15.1 00:03:12.115 SYMLINK libspdk_bdev.so 00:03:12.115 LIB libspdk_blobfs.a 00:03:12.115 SO libspdk_blobfs.so.10.0 00:03:12.115 LIB libspdk_lvol.a 00:03:12.115 SYMLINK libspdk_blobfs.so 00:03:12.115 SO libspdk_lvol.so.10.0 00:03:12.115 SYMLINK libspdk_lvol.so 00:03:12.376 CC lib/ftl/ftl_core.o 00:03:12.376 CC lib/nvmf/ctrlr.o 00:03:12.376 CC lib/ftl/ftl_init.o 00:03:12.376 CC lib/ftl/ftl_layout.o 00:03:12.376 CC lib/nvmf/ctrlr_discovery.o 00:03:12.376 CC lib/ftl/ftl_debug.o 00:03:12.376 CC lib/nvmf/ctrlr_bdev.o 00:03:12.376 CC lib/ftl/ftl_io.o 00:03:12.376 CC lib/nvmf/subsystem.o 00:03:12.376 CC lib/ftl/ftl_sb.o 00:03:12.376 CC lib/nvmf/nvmf.o 00:03:12.376 CC lib/ftl/ftl_l2p.o 00:03:12.376 CC lib/nvmf/nvmf_rpc.o 00:03:12.376 CC lib/ftl/ftl_l2p_flat.o 00:03:12.376 CC lib/nvmf/transport.o 00:03:12.376 CC lib/ftl/ftl_nv_cache.o 00:03:12.376 CC lib/nvmf/tcp.o 00:03:12.376 CC lib/nvmf/stubs.o 00:03:12.376 CC lib/ftl/ftl_band.o 00:03:12.376 CC lib/nvmf/mdns_server.o 00:03:12.376 CC lib/ftl/ftl_band_ops.o 00:03:12.376 CC lib/nbd/nbd.o 00:03:12.376 CC lib/ftl/ftl_writer.o 00:03:12.376 CC lib/nvmf/vfio_user.o 00:03:12.376 CC lib/nbd/nbd_rpc.o 00:03:12.376 CC lib/ftl/ftl_rq.o 00:03:12.376 CC lib/nvmf/rdma.o 00:03:12.376 CC lib/nvmf/auth.o 00:03:12.376 CC lib/ftl/ftl_reloc.o 00:03:12.376 CC lib/ublk/ublk.o 00:03:12.376 CC lib/ublk/ublk_rpc.o 00:03:12.376 CC lib/ftl/ftl_l2p_cache.o 00:03:12.376 CC lib/scsi/dev.o 00:03:12.376 CC lib/ftl/ftl_p2l.o 00:03:12.376 CC lib/scsi/lun.o 00:03:12.376 CC lib/scsi/port.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt.o 00:03:12.376 CC lib/scsi/scsi.o 00:03:12.376 CC lib/scsi/scsi_bdev.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:12.376 CC lib/scsi/scsi_pr.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:12.376 CC lib/scsi/scsi_rpc.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:12.376 CC lib/scsi/task.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:12.376 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:12.376 CC lib/ftl/utils/ftl_conf.o 00:03:12.376 CC lib/ftl/utils/ftl_md.o 00:03:12.376 CC lib/ftl/utils/ftl_mempool.o 00:03:12.376 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.376 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.376 CC lib/ftl/utils/ftl_property.o 00:03:12.376 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.376 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.376 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:12.376 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.376 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.376 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.376 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.376 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.376 CC lib/ftl/base/ftl_base_dev.o 00:03:12.376 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.376 CC lib/ftl/ftl_trace.o 00:03:12.376 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:12.376 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.947 LIB libspdk_nbd.a 00:03:12.947 LIB libspdk_scsi.a 00:03:12.947 SO libspdk_nbd.so.7.0 00:03:12.947 SO libspdk_scsi.so.9.0 00:03:12.947 SYMLINK libspdk_nbd.so 00:03:13.209 SYMLINK libspdk_scsi.so 00:03:13.209 LIB libspdk_ublk.a 00:03:13.209 SO libspdk_ublk.so.3.0 00:03:13.209 SYMLINK libspdk_ublk.so 00:03:13.209 LIB libspdk_ftl.a 00:03:13.469 CC lib/vhost/vhost.o 00:03:13.469 CC lib/vhost/vhost_rpc.o 00:03:13.469 CC lib/vhost/vhost_scsi.o 00:03:13.469 CC lib/vhost/vhost_blk.o 00:03:13.469 CC lib/vhost/rte_vhost_user.o 00:03:13.469 SO libspdk_ftl.so.9.0 00:03:13.469 CC lib/iscsi/conn.o 00:03:13.469 CC lib/iscsi/init_grp.o 00:03:13.469 CC lib/iscsi/param.o 00:03:13.469 CC lib/iscsi/iscsi.o 00:03:13.469 CC lib/iscsi/md5.o 00:03:13.469 CC lib/iscsi/portal_grp.o 00:03:13.469 CC lib/iscsi/iscsi_rpc.o 00:03:13.469 CC lib/iscsi/tgt_node.o 00:03:13.469 CC lib/iscsi/iscsi_subsystem.o 00:03:13.469 CC lib/iscsi/task.o 00:03:13.740 SYMLINK libspdk_ftl.so 00:03:14.315 LIB libspdk_nvmf.a 00:03:14.315 LIB libspdk_vhost.a 00:03:14.315 SO libspdk_nvmf.so.19.0 00:03:14.315 SO libspdk_vhost.so.8.0 00:03:14.576 SYMLINK libspdk_vhost.so 00:03:14.576 SYMLINK libspdk_nvmf.so 00:03:14.576 LIB libspdk_iscsi.a 00:03:14.576 SO libspdk_iscsi.so.8.0 00:03:14.836 SYMLINK libspdk_iscsi.so 00:03:15.408 CC module/vfu_device/vfu_virtio.o 00:03:15.408 CC module/vfu_device/vfu_virtio_blk.o 00:03:15.408 CC module/vfu_device/vfu_virtio_scsi.o 00:03:15.408 CC module/vfu_device/vfu_virtio_rpc.o 00:03:15.408 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.668 CC module/accel/ioat/accel_ioat.o 00:03:15.668 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.668 CC module/accel/error/accel_error.o 00:03:15.668 CC module/accel/error/accel_error_rpc.o 00:03:15.668 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.668 LIB libspdk_env_dpdk_rpc.a 00:03:15.668 CC module/accel/iaa/accel_iaa.o 00:03:15.668 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.668 CC module/accel/dsa/accel_dsa.o 00:03:15.668 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.668 CC module/keyring/file/keyring.o 00:03:15.668 CC module/sock/posix/posix.o 00:03:15.668 CC module/keyring/file/keyring_rpc.o 00:03:15.668 CC module/keyring/linux/keyring.o 00:03:15.668 CC module/keyring/linux/keyring_rpc.o 00:03:15.668 CC module/blob/bdev/blob_bdev.o 00:03:15.668 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.668 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.668 SO libspdk_env_dpdk_rpc.so.6.0 00:03:15.668 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.668 LIB libspdk_keyring_linux.a 00:03:15.668 LIB libspdk_accel_ioat.a 00:03:15.668 LIB libspdk_scheduler_gscheduler.a 00:03:15.668 LIB libspdk_accel_error.a 00:03:15.668 LIB libspdk_keyring_file.a 00:03:15.668 SO libspdk_keyring_linux.so.1.0 00:03:15.668 LIB libspdk_accel_iaa.a 00:03:15.668 SO libspdk_accel_error.so.2.0 00:03:15.668 SO libspdk_accel_ioat.so.6.0 00:03:15.668 SO libspdk_keyring_file.so.1.0 00:03:15.668 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.668 LIB libspdk_scheduler_dynamic.a 00:03:15.668 SO libspdk_scheduler_gscheduler.so.4.0 00:03:15.929 LIB libspdk_accel_dsa.a 00:03:15.929 SO libspdk_accel_iaa.so.3.0 
00:03:15.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:15.929 SO libspdk_scheduler_dynamic.so.4.0 00:03:15.929 SYMLINK libspdk_scheduler_gscheduler.so 00:03:15.929 SYMLINK libspdk_keyring_linux.so 00:03:15.929 LIB libspdk_blob_bdev.a 00:03:15.929 SYMLINK libspdk_accel_error.so 00:03:15.929 SYMLINK libspdk_keyring_file.so 00:03:15.929 SYMLINK libspdk_accel_ioat.so 00:03:15.929 SO libspdk_accel_dsa.so.5.0 00:03:15.929 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.929 SO libspdk_blob_bdev.so.11.0 00:03:15.929 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.929 SYMLINK libspdk_accel_iaa.so 00:03:15.929 SYMLINK libspdk_accel_dsa.so 00:03:15.929 LIB libspdk_vfu_device.a 00:03:15.929 SYMLINK libspdk_blob_bdev.so 00:03:15.929 SO libspdk_vfu_device.so.3.0 00:03:16.189 SYMLINK libspdk_vfu_device.so 00:03:16.189 LIB libspdk_sock_posix.a 00:03:16.189 SO libspdk_sock_posix.so.6.0 00:03:16.448 SYMLINK libspdk_sock_posix.so 00:03:16.448 CC module/bdev/delay/vbdev_delay.o 00:03:16.448 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.448 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.448 CC module/bdev/malloc/bdev_malloc.o 00:03:16.448 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.448 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.448 CC module/bdev/gpt/gpt.o 00:03:16.448 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.448 CC module/bdev/aio/bdev_aio.o 00:03:16.448 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.448 CC module/bdev/aio/bdev_aio_rpc.o 00:03:16.448 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.448 CC module/bdev/error/vbdev_error.o 00:03:16.448 CC module/bdev/nvme/bdev_nvme.o 00:03:16.448 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.448 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.448 CC module/bdev/nvme/nvme_rpc.o 00:03:16.448 CC module/bdev/null/bdev_null.o 00:03:16.448 CC module/bdev/null/bdev_null_rpc.o 00:03:16.448 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.448 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.448 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.448 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.448 CC module/bdev/nvme/vbdev_opal.o 00:03:16.448 CC module/bdev/iscsi/bdev_iscsi.o 00:03:16.448 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.448 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.448 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.448 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.448 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.448 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.448 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.448 CC module/bdev/ftl/bdev_ftl.o 00:03:16.448 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.448 CC module/bdev/split/vbdev_split.o 00:03:16.448 CC module/bdev/split/vbdev_split_rpc.o 00:03:16.448 CC module/bdev/raid/bdev_raid.o 00:03:16.448 CC module/bdev/raid/bdev_raid_rpc.o 00:03:16.448 CC module/bdev/raid/bdev_raid_sb.o 00:03:16.448 CC module/bdev/raid/raid0.o 00:03:16.448 CC module/bdev/raid/raid1.o 00:03:16.448 CC module/bdev/raid/concat.o 00:03:16.707 LIB libspdk_bdev_split.a 00:03:16.707 LIB libspdk_blobfs_bdev.a 00:03:16.707 SO libspdk_bdev_split.so.6.0 00:03:16.707 SO libspdk_blobfs_bdev.so.6.0 00:03:16.707 LIB libspdk_bdev_null.a 00:03:16.707 LIB libspdk_bdev_gpt.a 00:03:16.707 LIB libspdk_bdev_error.a 00:03:16.707 LIB libspdk_bdev_ftl.a 00:03:16.707 LIB libspdk_bdev_passthru.a 00:03:16.707 SO libspdk_bdev_null.so.6.0 00:03:16.707 SO libspdk_bdev_gpt.so.6.0 00:03:16.707 LIB libspdk_bdev_aio.a 00:03:16.707 SYMLINK libspdk_bdev_split.so 00:03:16.707 SO libspdk_bdev_ftl.so.6.0 00:03:16.707 SO 
libspdk_bdev_passthru.so.6.0 00:03:16.707 SYMLINK libspdk_blobfs_bdev.so 00:03:16.967 SO libspdk_bdev_error.so.6.0 00:03:16.967 LIB libspdk_bdev_malloc.a 00:03:16.967 SO libspdk_bdev_aio.so.6.0 00:03:16.967 LIB libspdk_bdev_zone_block.a 00:03:16.967 LIB libspdk_bdev_delay.a 00:03:16.967 SO libspdk_bdev_malloc.so.6.0 00:03:16.967 LIB libspdk_bdev_iscsi.a 00:03:16.967 SYMLINK libspdk_bdev_gpt.so 00:03:16.967 SYMLINK libspdk_bdev_null.so 00:03:16.967 SO libspdk_bdev_zone_block.so.6.0 00:03:16.967 SYMLINK libspdk_bdev_ftl.so 00:03:16.967 SYMLINK libspdk_bdev_passthru.so 00:03:16.967 SYMLINK libspdk_bdev_error.so 00:03:16.967 SO libspdk_bdev_delay.so.6.0 00:03:16.967 SO libspdk_bdev_iscsi.so.6.0 00:03:16.967 SYMLINK libspdk_bdev_aio.so 00:03:16.967 SYMLINK libspdk_bdev_malloc.so 00:03:16.967 LIB libspdk_bdev_lvol.a 00:03:16.967 SYMLINK libspdk_bdev_zone_block.so 00:03:16.967 SYMLINK libspdk_bdev_delay.so 00:03:16.967 SO libspdk_bdev_lvol.so.6.0 00:03:16.967 SYMLINK libspdk_bdev_iscsi.so 00:03:16.967 LIB libspdk_bdev_virtio.a 00:03:16.967 SO libspdk_bdev_virtio.so.6.0 00:03:16.967 SYMLINK libspdk_bdev_lvol.so 00:03:17.227 SYMLINK libspdk_bdev_virtio.so 00:03:17.487 LIB libspdk_bdev_raid.a 00:03:17.487 SO libspdk_bdev_raid.so.6.0 00:03:17.487 SYMLINK libspdk_bdev_raid.so 00:03:18.426 LIB libspdk_bdev_nvme.a 00:03:18.426 SO libspdk_bdev_nvme.so.7.0 00:03:18.426 SYMLINK libspdk_bdev_nvme.so 00:03:19.370 CC module/event/subsystems/scheduler/scheduler.o 00:03:19.370 CC module/event/subsystems/vmd/vmd.o 00:03:19.370 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:19.370 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:19.370 CC module/event/subsystems/iobuf/iobuf.o 00:03:19.370 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:19.370 CC module/event/subsystems/sock/sock.o 00:03:19.370 CC module/event/subsystems/keyring/keyring.o 00:03:19.370 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:19.370 LIB libspdk_event_scheduler.a 00:03:19.370 LIB libspdk_event_vmd.a 00:03:19.370 LIB libspdk_event_vhost_blk.a 00:03:19.370 LIB libspdk_event_vfu_tgt.a 00:03:19.370 LIB libspdk_event_sock.a 00:03:19.370 LIB libspdk_event_keyring.a 00:03:19.370 SO libspdk_event_scheduler.so.4.0 00:03:19.370 SO libspdk_event_vmd.so.6.0 00:03:19.370 LIB libspdk_event_iobuf.a 00:03:19.370 SO libspdk_event_vhost_blk.so.3.0 00:03:19.370 SO libspdk_event_vfu_tgt.so.3.0 00:03:19.370 SO libspdk_event_sock.so.5.0 00:03:19.370 SO libspdk_event_keyring.so.1.0 00:03:19.370 SO libspdk_event_iobuf.so.3.0 00:03:19.370 SYMLINK libspdk_event_scheduler.so 00:03:19.370 SYMLINK libspdk_event_vmd.so 00:03:19.370 SYMLINK libspdk_event_vhost_blk.so 00:03:19.631 SYMLINK libspdk_event_vfu_tgt.so 00:03:19.631 SYMLINK libspdk_event_keyring.so 00:03:19.631 SYMLINK libspdk_event_sock.so 00:03:19.631 SYMLINK libspdk_event_iobuf.so 00:03:19.891 CC module/event/subsystems/accel/accel.o 00:03:19.891 LIB libspdk_event_accel.a 00:03:19.891 SO libspdk_event_accel.so.6.0 00:03:20.151 SYMLINK libspdk_event_accel.so 00:03:20.412 CC module/event/subsystems/bdev/bdev.o 00:03:20.673 LIB libspdk_event_bdev.a 00:03:20.673 SO libspdk_event_bdev.so.6.0 00:03:20.673 SYMLINK libspdk_event_bdev.so 00:03:20.933 CC module/event/subsystems/nbd/nbd.o 00:03:21.193 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:21.193 CC module/event/subsystems/ublk/ublk.o 00:03:21.193 CC module/event/subsystems/scsi/scsi.o 00:03:21.193 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:21.193 LIB libspdk_event_nbd.a 00:03:21.193 LIB libspdk_event_ublk.a 00:03:21.193 SO 
libspdk_event_nbd.so.6.0 00:03:21.193 LIB libspdk_event_scsi.a 00:03:21.193 SO libspdk_event_ublk.so.3.0 00:03:21.193 SO libspdk_event_scsi.so.6.0 00:03:21.193 SYMLINK libspdk_event_nbd.so 00:03:21.193 LIB libspdk_event_nvmf.a 00:03:21.454 SYMLINK libspdk_event_ublk.so 00:03:21.454 SO libspdk_event_nvmf.so.6.0 00:03:21.454 SYMLINK libspdk_event_scsi.so 00:03:21.454 SYMLINK libspdk_event_nvmf.so 00:03:21.714 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.714 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.974 LIB libspdk_event_vhost_scsi.a 00:03:21.974 LIB libspdk_event_iscsi.a 00:03:21.974 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.974 SO libspdk_event_iscsi.so.6.0 00:03:21.974 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.974 SYMLINK libspdk_event_iscsi.so 00:03:22.235 SO libspdk.so.6.0 00:03:22.235 SYMLINK libspdk.so 00:03:22.496 TEST_HEADER include/spdk/accel.h 00:03:22.496 CC app/trace_record/trace_record.o 00:03:22.496 CC app/spdk_top/spdk_top.o 00:03:22.496 CXX app/trace/trace.o 00:03:22.496 TEST_HEADER include/spdk/assert.h 00:03:22.496 TEST_HEADER include/spdk/accel_module.h 00:03:22.496 TEST_HEADER include/spdk/barrier.h 00:03:22.496 TEST_HEADER include/spdk/bdev.h 00:03:22.496 TEST_HEADER include/spdk/base64.h 00:03:22.496 CC test/rpc_client/rpc_client_test.o 00:03:22.496 TEST_HEADER include/spdk/bdev_module.h 00:03:22.496 TEST_HEADER include/spdk/bit_array.h 00:03:22.496 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.496 TEST_HEADER include/spdk/bit_pool.h 00:03:22.496 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.496 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.496 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.496 CC app/spdk_nvme_perf/perf.o 00:03:22.496 TEST_HEADER include/spdk/blobfs.h 00:03:22.496 TEST_HEADER include/spdk/blob.h 00:03:22.496 TEST_HEADER include/spdk/conf.h 00:03:22.496 CC app/spdk_nvme_identify/identify.o 00:03:22.496 CC app/spdk_lspci/spdk_lspci.o 00:03:22.496 TEST_HEADER include/spdk/config.h 00:03:22.496 TEST_HEADER include/spdk/cpuset.h 00:03:22.496 TEST_HEADER include/spdk/crc16.h 00:03:22.496 TEST_HEADER include/spdk/crc32.h 00:03:22.496 TEST_HEADER include/spdk/crc64.h 00:03:22.496 TEST_HEADER include/spdk/dif.h 00:03:22.496 TEST_HEADER include/spdk/dma.h 00:03:22.496 TEST_HEADER include/spdk/endian.h 00:03:22.496 TEST_HEADER include/spdk/env.h 00:03:22.496 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.496 TEST_HEADER include/spdk/event.h 00:03:22.496 TEST_HEADER include/spdk/fd_group.h 00:03:22.496 TEST_HEADER include/spdk/file.h 00:03:22.496 TEST_HEADER include/spdk/fd.h 00:03:22.496 TEST_HEADER include/spdk/ftl.h 00:03:22.496 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.496 TEST_HEADER include/spdk/hexlify.h 00:03:22.496 TEST_HEADER include/spdk/histogram_data.h 00:03:22.496 TEST_HEADER include/spdk/idxd.h 00:03:22.496 TEST_HEADER include/spdk/init.h 00:03:22.496 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.496 TEST_HEADER include/spdk/ioat.h 00:03:22.496 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.496 TEST_HEADER include/spdk/iscsi_spec.h 00:03:22.496 TEST_HEADER include/spdk/json.h 00:03:22.496 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.496 TEST_HEADER include/spdk/keyring_module.h 00:03:22.496 TEST_HEADER include/spdk/keyring.h 00:03:22.496 TEST_HEADER include/spdk/likely.h 00:03:22.496 TEST_HEADER include/spdk/log.h 00:03:22.496 TEST_HEADER include/spdk/lvol.h 00:03:22.496 TEST_HEADER include/spdk/memory.h 00:03:22.496 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:22.496 CC app/spdk_dd/spdk_dd.o 00:03:22.496 
TEST_HEADER include/spdk/nbd.h 00:03:22.496 TEST_HEADER include/spdk/mmio.h 00:03:22.496 TEST_HEADER include/spdk/notify.h 00:03:22.496 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.496 TEST_HEADER include/spdk/nvme.h 00:03:22.496 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.496 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.765 CC app/nvmf_tgt/nvmf_main.o 00:03:22.765 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.765 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.765 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.765 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.765 TEST_HEADER include/spdk/nvmf.h 00:03:22.765 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.765 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.765 TEST_HEADER include/spdk/opal.h 00:03:22.765 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.765 TEST_HEADER include/spdk/pci_ids.h 00:03:22.765 TEST_HEADER include/spdk/opal_spec.h 00:03:22.765 CC app/spdk_tgt/spdk_tgt.o 00:03:22.765 TEST_HEADER include/spdk/pipe.h 00:03:22.765 TEST_HEADER include/spdk/reduce.h 00:03:22.765 TEST_HEADER include/spdk/queue.h 00:03:22.765 TEST_HEADER include/spdk/rpc.h 00:03:22.765 TEST_HEADER include/spdk/scsi.h 00:03:22.765 TEST_HEADER include/spdk/scheduler.h 00:03:22.765 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.765 TEST_HEADER include/spdk/sock.h 00:03:22.765 TEST_HEADER include/spdk/string.h 00:03:22.765 TEST_HEADER include/spdk/stdinc.h 00:03:22.765 TEST_HEADER include/spdk/trace.h 00:03:22.765 TEST_HEADER include/spdk/thread.h 00:03:22.765 TEST_HEADER include/spdk/trace_parser.h 00:03:22.765 TEST_HEADER include/spdk/tree.h 00:03:22.765 TEST_HEADER include/spdk/ublk.h 00:03:22.765 TEST_HEADER include/spdk/version.h 00:03:22.765 TEST_HEADER include/spdk/uuid.h 00:03:22.765 TEST_HEADER include/spdk/util.h 00:03:22.765 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.765 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.765 TEST_HEADER include/spdk/vmd.h 00:03:22.765 TEST_HEADER include/spdk/vhost.h 00:03:22.765 TEST_HEADER include/spdk/zipf.h 00:03:22.765 TEST_HEADER include/spdk/xor.h 00:03:22.765 CXX test/cpp_headers/accel_module.o 00:03:22.765 CXX test/cpp_headers/accel.o 00:03:22.765 CXX test/cpp_headers/barrier.o 00:03:22.765 CXX test/cpp_headers/assert.o 00:03:22.765 CXX test/cpp_headers/base64.o 00:03:22.765 CXX test/cpp_headers/bdev.o 00:03:22.765 CXX test/cpp_headers/bdev_module.o 00:03:22.765 CXX test/cpp_headers/bdev_zone.o 00:03:22.765 CXX test/cpp_headers/bit_pool.o 00:03:22.765 CXX test/cpp_headers/bit_array.o 00:03:22.765 CXX test/cpp_headers/blob_bdev.o 00:03:22.765 CXX test/cpp_headers/blobfs_bdev.o 00:03:22.765 CXX test/cpp_headers/blobfs.o 00:03:22.765 CXX test/cpp_headers/config.o 00:03:22.765 CXX test/cpp_headers/blob.o 00:03:22.765 CXX test/cpp_headers/cpuset.o 00:03:22.765 CXX test/cpp_headers/crc16.o 00:03:22.765 CXX test/cpp_headers/conf.o 00:03:22.765 CXX test/cpp_headers/crc32.o 00:03:22.765 CXX test/cpp_headers/crc64.o 00:03:22.765 CXX test/cpp_headers/endian.o 00:03:22.765 CXX test/cpp_headers/dif.o 00:03:22.765 CXX test/cpp_headers/dma.o 00:03:22.765 CXX test/cpp_headers/event.o 00:03:22.765 CXX test/cpp_headers/env.o 00:03:22.765 CXX test/cpp_headers/fd_group.o 00:03:22.765 CXX test/cpp_headers/fd.o 00:03:22.765 CXX test/cpp_headers/file.o 00:03:22.765 CXX test/cpp_headers/env_dpdk.o 00:03:22.765 CXX test/cpp_headers/gpt_spec.o 00:03:22.765 CXX test/cpp_headers/hexlify.o 00:03:22.765 CXX test/cpp_headers/histogram_data.o 00:03:22.765 CXX test/cpp_headers/ftl.o 00:03:22.765 CXX test/cpp_headers/idxd.o 
00:03:22.765 CXX test/cpp_headers/init.o 00:03:22.765 CXX test/cpp_headers/ioat_spec.o 00:03:22.765 CXX test/cpp_headers/idxd_spec.o 00:03:22.765 CXX test/cpp_headers/ioat.o 00:03:22.765 CXX test/cpp_headers/json.o 00:03:22.765 CXX test/cpp_headers/iscsi_spec.o 00:03:22.765 CXX test/cpp_headers/jsonrpc.o 00:03:22.765 CXX test/cpp_headers/keyring.o 00:03:22.765 CXX test/cpp_headers/keyring_module.o 00:03:22.765 CXX test/cpp_headers/likely.o 00:03:22.765 CXX test/cpp_headers/log.o 00:03:22.765 CC test/thread/poller_perf/poller_perf.o 00:03:22.765 CXX test/cpp_headers/mmio.o 00:03:22.765 CXX test/cpp_headers/lvol.o 00:03:22.765 CXX test/cpp_headers/nvme.o 00:03:22.765 CXX test/cpp_headers/notify.o 00:03:22.765 CXX test/cpp_headers/nvme_intel.o 00:03:22.765 CXX test/cpp_headers/memory.o 00:03:22.765 CXX test/cpp_headers/nbd.o 00:03:22.765 CXX test/cpp_headers/nvme_spec.o 00:03:22.765 CXX test/cpp_headers/nvme_zns.o 00:03:22.765 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.765 CXX test/cpp_headers/nvmf.o 00:03:22.765 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.765 CXX test/cpp_headers/nvmf_transport.o 00:03:22.765 CXX test/cpp_headers/pci_ids.o 00:03:22.765 CXX test/cpp_headers/opal_spec.o 00:03:22.765 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.765 CXX test/cpp_headers/queue.o 00:03:22.765 CXX test/cpp_headers/pipe.o 00:03:22.765 CXX test/cpp_headers/scsi.o 00:03:22.765 CXX test/cpp_headers/opal.o 00:03:22.765 CXX test/cpp_headers/rpc.o 00:03:22.765 CXX test/cpp_headers/nvmf_spec.o 00:03:22.765 CXX test/cpp_headers/scheduler.o 00:03:22.765 CXX test/cpp_headers/scsi_spec.o 00:03:22.765 CXX test/cpp_headers/sock.o 00:03:22.765 CXX test/cpp_headers/stdinc.o 00:03:22.765 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.765 CXX test/cpp_headers/string.o 00:03:22.765 CXX test/cpp_headers/tree.o 00:03:22.765 CXX test/cpp_headers/trace_parser.o 00:03:22.765 CXX test/cpp_headers/util.o 00:03:22.766 CXX test/cpp_headers/ublk.o 00:03:22.766 CC test/app/jsoncat/jsoncat.o 00:03:22.766 CXX test/cpp_headers/uuid.o 00:03:22.766 CXX test/cpp_headers/reduce.o 00:03:22.766 CXX test/cpp_headers/thread.o 00:03:22.766 CXX test/cpp_headers/version.o 00:03:22.766 CXX test/cpp_headers/trace.o 00:03:22.766 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.766 CXX test/cpp_headers/vhost.o 00:03:22.766 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.766 CC test/env/memory/memory_ut.o 00:03:22.766 CXX test/cpp_headers/vmd.o 00:03:22.766 CXX test/cpp_headers/zipf.o 00:03:22.766 CXX test/cpp_headers/xor.o 00:03:22.766 CC test/env/pci/pci_ut.o 00:03:22.766 CC test/dma/test_dma/test_dma.o 00:03:22.766 CC test/env/vtophys/vtophys.o 00:03:22.766 CC test/app/bdev_svc/bdev_svc.o 00:03:22.766 CC test/app/stub/stub.o 00:03:22.766 LINK spdk_lspci 00:03:22.766 CC examples/ioat/perf/perf.o 00:03:22.766 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.033 CC test/app/histogram_perf/histogram_perf.o 00:03:23.033 CC examples/util/zipf/zipf.o 00:03:23.033 CC app/fio/nvme/fio_plugin.o 00:03:23.033 LINK spdk_nvme_discover 00:03:23.033 CC app/fio/bdev/fio_plugin.o 00:03:23.033 CC examples/ioat/verify/verify.o 00:03:23.033 LINK spdk_trace_record 00:03:23.033 LINK interrupt_tgt 00:03:23.033 CC test/env/mem_callbacks/mem_callbacks.o 00:03:23.033 LINK rpc_client_test 00:03:23.293 LINK spdk_tgt 00:03:23.293 LINK env_dpdk_post_init 00:03:23.293 LINK ioat_perf 00:03:23.293 LINK jsoncat 00:03:23.293 LINK vtophys 00:03:23.293 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:23.293 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.293 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.293 LINK stub 00:03:23.293 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.293 LINK nvmf_tgt 00:03:23.293 LINK spdk_dd 00:03:23.293 LINK verify 00:03:23.293 LINK iscsi_tgt 00:03:23.553 LINK test_dma 00:03:23.553 LINK pci_ut 00:03:23.554 LINK poller_perf 00:03:23.554 LINK zipf 00:03:23.554 LINK histogram_perf 00:03:23.554 LINK spdk_trace 00:03:23.554 LINK spdk_nvme 00:03:23.554 LINK bdev_svc 00:03:23.554 LINK spdk_nvme_perf 00:03:23.554 LINK vhost_fuzz 00:03:23.554 LINK nvme_fuzz 00:03:23.813 LINK mem_callbacks 00:03:23.813 LINK spdk_bdev 00:03:23.813 LINK spdk_top 00:03:23.813 CC app/vhost/vhost.o 00:03:24.073 CC test/nvme/overhead/overhead.o 00:03:24.073 CC test/nvme/connect_stress/connect_stress.o 00:03:24.073 CC test/nvme/sgl/sgl.o 00:03:24.073 CC test/nvme/aer/aer.o 00:03:24.073 CC test/nvme/fdp/fdp.o 00:03:24.073 CC test/nvme/e2edp/nvme_dp.o 00:03:24.073 CC test/nvme/reserve/reserve.o 00:03:24.073 CC test/nvme/reset/reset.o 00:03:24.073 CC test/nvme/startup/startup.o 00:03:24.073 CC test/nvme/compliance/nvme_compliance.o 00:03:24.073 LINK spdk_nvme_identify 00:03:24.073 CC test/nvme/cuse/cuse.o 00:03:24.073 CC test/nvme/err_injection/err_injection.o 00:03:24.073 CC test/nvme/fused_ordering/fused_ordering.o 00:03:24.073 CC test/nvme/boot_partition/boot_partition.o 00:03:24.073 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.073 CC test/nvme/simple_copy/simple_copy.o 00:03:24.073 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:24.073 CC test/event/reactor/reactor.o 00:03:24.073 CC examples/idxd/perf/perf.o 00:03:24.073 CC examples/vmd/led/led.o 00:03:24.073 CC test/event/reactor_perf/reactor_perf.o 00:03:24.073 CC test/event/event_perf/event_perf.o 00:03:24.073 CC examples/sock/hello_world/hello_sock.o 00:03:24.073 CC test/event/app_repeat/app_repeat.o 00:03:24.073 CC test/accel/dif/dif.o 00:03:24.073 CC test/blobfs/mkfs/mkfs.o 00:03:24.073 CC examples/thread/thread/thread_ex.o 00:03:24.073 CC test/event/scheduler/scheduler.o 00:03:24.073 LINK vhost 00:03:24.073 LINK reactor 00:03:24.073 CC test/lvol/esnap/esnap.o 00:03:24.073 LINK lsvmd 00:03:24.073 LINK boot_partition 00:03:24.334 LINK reset 00:03:24.334 LINK reactor_perf 00:03:24.334 LINK startup 00:03:24.334 LINK connect_stress 00:03:24.334 LINK event_perf 00:03:24.334 LINK err_injection 00:03:24.334 LINK led 00:03:24.334 LINK reserve 00:03:24.334 LINK app_repeat 00:03:24.334 LINK doorbell_aers 00:03:24.334 LINK overhead 00:03:24.334 LINK fused_ordering 00:03:24.334 LINK simple_copy 00:03:24.334 LINK sgl 00:03:24.334 LINK nvme_dp 00:03:24.334 LINK aer 00:03:24.334 LINK hello_sock 00:03:24.334 LINK mkfs 00:03:24.334 LINK nvme_compliance 00:03:24.334 LINK fdp 00:03:24.334 LINK iscsi_fuzz 00:03:24.334 LINK scheduler 00:03:24.334 LINK idxd_perf 00:03:24.334 LINK thread 00:03:24.334 LINK memory_ut 00:03:24.594 LINK dif 00:03:24.855 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.856 CC examples/nvme/hello_world/hello_world.o 00:03:24.856 CC examples/nvme/abort/abort.o 00:03:24.856 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:24.856 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.856 CC examples/nvme/reconnect/reconnect.o 00:03:24.856 CC examples/nvme/hotplug/hotplug.o 00:03:24.856 CC examples/nvme/arbitration/arbitration.o 00:03:24.856 CC examples/accel/perf/accel_perf.o 00:03:24.856 CC examples/blob/cli/blobcli.o 00:03:24.856 CC examples/blob/hello_world/hello_blob.o 00:03:24.856 LINK pmr_persistence 00:03:25.117 LINK cmb_copy 00:03:25.117 LINK hello_world 00:03:25.117 LINK hotplug 
00:03:25.117 CC test/bdev/bdevio/bdevio.o 00:03:25.117 LINK cuse 00:03:25.117 LINK arbitration 00:03:25.117 LINK reconnect 00:03:25.117 LINK abort 00:03:25.117 LINK hello_blob 00:03:25.117 LINK nvme_manage 00:03:25.378 LINK accel_perf 00:03:25.378 LINK blobcli 00:03:25.378 LINK bdevio 00:03:25.951 CC examples/bdev/hello_world/hello_bdev.o 00:03:25.951 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.213 LINK hello_bdev 00:03:26.473 LINK bdevperf 00:03:27.045 CC examples/nvmf/nvmf/nvmf.o 00:03:27.616 LINK nvmf 00:03:28.210 LINK esnap 00:03:28.782 00:03:28.782 real 0m50.365s 00:03:28.782 user 6m25.760s 00:03:28.782 sys 4m1.917s 00:03:28.782 15:08:38 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:28.782 15:08:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:28.782 ************************************ 00:03:28.782 END TEST make 00:03:28.782 ************************************ 00:03:28.782 15:08:38 -- common/autotest_common.sh@1142 -- $ return 0 00:03:28.782 15:08:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:28.782 15:08:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:28.782 15:08:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:28.782 15:08:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:28.782 15:08:38 -- pm/common@44 -- $ pid=352947 00:03:28.782 15:08:38 -- pm/common@50 -- $ kill -TERM 352947 00:03:28.782 15:08:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:28.782 15:08:38 -- pm/common@44 -- $ pid=352948 00:03:28.782 15:08:38 -- pm/common@50 -- $ kill -TERM 352948 00:03:28.782 15:08:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:28.782 15:08:38 -- pm/common@44 -- $ pid=352950 00:03:28.782 15:08:38 -- pm/common@50 -- $ kill -TERM 352950 00:03:28.782 15:08:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:28.782 15:08:38 -- pm/common@44 -- $ pid=352977 00:03:28.782 15:08:38 -- pm/common@50 -- $ sudo -E kill -TERM 352977 00:03:28.782 15:08:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:28.782 15:08:38 -- nvmf/common.sh@7 -- # uname -s 00:03:28.782 15:08:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.782 15:08:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.782 15:08:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.782 15:08:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.782 15:08:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.782 15:08:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.782 15:08:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.782 15:08:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.782 15:08:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.782 15:08:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.782 15:08:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:28.782 15:08:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:03:28.782 15:08:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.782 15:08:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.782 15:08:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:28.782 15:08:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:28.782 15:08:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:28.782 15:08:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.782 15:08:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.782 15:08:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.782 15:08:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.782 15:08:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.782 15:08:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.782 15:08:38 -- paths/export.sh@5 -- # export PATH 00:03:28.782 15:08:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.782 15:08:38 -- nvmf/common.sh@47 -- # : 0 00:03:28.782 15:08:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:28.782 15:08:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:28.782 15:08:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:28.782 15:08:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.782 15:08:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.782 15:08:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:28.782 15:08:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:28.782 15:08:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:28.782 15:08:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.782 15:08:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.782 15:08:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.782 15:08:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.782 15:08:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:28.782 15:08:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.782 15:08:38 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:28.782 15:08:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.782 15:08:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.782 15:08:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.782 15:08:38 -- spdk/autotest.sh@48 -- # udevadm_pid=416039 00:03:28.782 15:08:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:28.782 15:08:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:28.782 15:08:38 -- pm/common@17 -- # local monitor 00:03:28.782 15:08:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:28.782 15:08:38 -- pm/common@21 -- # date +%s 00:03:28.782 15:08:38 -- pm/common@21 -- # date +%s 00:03:28.782 15:08:38 -- pm/common@25 -- # sleep 1 00:03:28.782 15:08:38 -- pm/common@21 -- # date +%s 00:03:28.782 15:08:38 -- pm/common@21 -- # date +%s 00:03:28.782 15:08:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048918 00:03:28.782 15:08:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048918 00:03:28.782 15:08:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048918 00:03:28.782 15:08:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721048918 00:03:28.782 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048918_collect-vmstat.pm.log 00:03:29.043 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048918_collect-cpu-load.pm.log 00:03:29.043 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048918_collect-cpu-temp.pm.log 00:03:29.043 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721048918_collect-bmc-pm.bmc.pm.log 00:03:29.988 15:08:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:29.988 15:08:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:29.988 15:08:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:29.988 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:29.988 15:08:39 -- spdk/autotest.sh@59 -- # create_test_list 00:03:29.988 15:08:39 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:29.988 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:29.988 15:08:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:29.988 15:08:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:29.988 15:08:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
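The traced pm helpers above start one background collector per resource (collect-cpu-load, collect-vmstat, collect-cpu-temp and, via sudo -E, collect-bmc-pm), all pointed at the shared output/power directory with a monitor.autotest.sh.<epoch> prefix; the kill -TERM calls near the top of this log tear the same collectors down through their collect-*.pid files. A minimal sketch of that start/stop pattern, with an assumed output directory and simplified pid-file handling (the real SPDK pm scripts manage their own pid files):

    #!/usr/bin/env bash
    # Sketch only -- mirrors the flags visible in the trace, not the exact SPDK pm/common code.
    out=/tmp/power                                  # assumed output directory for illustration
    prefix="monitor.autotest.sh.$(date +%s)"        # epoch-suffixed prefix, as in the trace
    mkdir -p "$out"

    # start: one backgrounded collector per resource, remembering its pid
    for collector in collect-cpu-load collect-vmstat collect-cpu-temp; do
        ./scripts/perf/pm/"$collector" -d "$out" -l -p "$prefix" &
        echo $! > "$out/$collector.pid"             # assumption: caller records the pid
    done

    # stop: signal every collector through its pid file, as the kill -TERM lines above do
    for pidfile in "$out"/collect-*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
    done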
00:03:29.988 15:08:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:29.988 15:08:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:29.988 15:08:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:29.988 15:08:39 -- common/autotest_common.sh@1455 -- # uname 00:03:29.988 15:08:39 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:29.988 15:08:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:29.988 15:08:39 -- common/autotest_common.sh@1475 -- # uname 00:03:29.988 15:08:39 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:29.988 15:08:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:29.988 15:08:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:29.988 15:08:39 -- spdk/autotest.sh@72 -- # hash lcov 00:03:29.988 15:08:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:29.988 15:08:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:29.988 --rc lcov_branch_coverage=1 00:03:29.988 --rc lcov_function_coverage=1 00:03:29.988 --rc genhtml_branch_coverage=1 00:03:29.988 --rc genhtml_function_coverage=1 00:03:29.988 --rc genhtml_legend=1 00:03:29.988 --rc geninfo_all_blocks=1 00:03:29.988 ' 00:03:29.988 15:08:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:29.988 --rc lcov_branch_coverage=1 00:03:29.988 --rc lcov_function_coverage=1 00:03:29.988 --rc genhtml_branch_coverage=1 00:03:29.988 --rc genhtml_function_coverage=1 00:03:29.988 --rc genhtml_legend=1 00:03:29.988 --rc geninfo_all_blocks=1 00:03:29.988 ' 00:03:29.988 15:08:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:29.988 --rc lcov_branch_coverage=1 00:03:29.988 --rc lcov_function_coverage=1 00:03:29.988 --rc genhtml_branch_coverage=1 00:03:29.988 --rc genhtml_function_coverage=1 00:03:29.988 --rc genhtml_legend=1 00:03:29.988 --rc geninfo_all_blocks=1 00:03:29.988 --no-external' 00:03:29.988 15:08:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:29.988 --rc lcov_branch_coverage=1 00:03:29.988 --rc lcov_function_coverage=1 00:03:29.988 --rc genhtml_branch_coverage=1 00:03:29.988 --rc genhtml_function_coverage=1 00:03:29.988 --rc genhtml_legend=1 00:03:29.988 --rc geninfo_all_blocks=1 00:03:29.988 --no-external' 00:03:29.988 15:08:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:29.988 lcov: LCOV version 1.14 00:03:29.988 15:08:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:39.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.977 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:48.109 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:48.109 
geninfo: WARNING: GCOV did not produce any data for most of the test/cpp_headers gcno files; the same '<header>.gcno:no functions found' plus 'GCOV did not produce any data' warning pair is emitted between 00:03:48.109 and 00:03:48.891 for accel_module, accel, base64, bit_array, bdev_module, bdev_zone, assert, bdev, bit_pool, blob_bdev, blobfs_bdev, blob, crc16, crc64, crc32, dif, endian, histogram_data, config, dma, fd_group, conf, env, blobfs, init, event, fd, idxd, gpt_spec, json, hexlify, cpuset, ftl, file, keyring_module, ioat_spec, jsonrpc, likely, ioat, env_dpdk, lvol, notify, mmio, memory, log, nvme, iscsi_spec, nvmf_transport, nvmf_fc_spec, nvme_zns, nvme_intel, opal_spec, rpc, util, keyring, nvmf, ublk, uuid, trace_parser, vhost, queue, idxd_spec, sock, nbd, stdinc, nvme_ocssd, nvmf_spec, vfio_user_spec, thread, version, string, reduce, nvme_spec, tree, scsi, zipf, xor, nvme_ocssd_spec, scsi_spec, pci_ids, nvmf_cmd and opal. 00:03:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:48.891 geninfo: WARNING: GCOV did not produce any data for
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.891 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:48.891 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:48.891 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:48.891 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:58.891 15:09:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:58.891 15:09:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:58.891 15:09:08 -- common/autotest_common.sh@10 -- # set +x 00:03:58.891 15:09:08 -- spdk/autotest.sh@91 -- # rm -f 00:03:58.891 15:09:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.192 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:02.192 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:02.192 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:02.192 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:02.192 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:02.453 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:02.453 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:02.713 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:02.713 15:09:12 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:02.714 15:09:12 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:02.714 15:09:12 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:02.714 15:09:12 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:02.714 15:09:12 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:02.714 15:09:12 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:02.714 15:09:12 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:02.714 15:09:12 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.714 15:09:12 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:02.714 15:09:12 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:02.714 15:09:12 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.714 15:09:12 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.714 15:09:12 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:02.714 15:09:12 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:02.714 15:09:12 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.714 No valid GPT data, bailing 00:04:02.714 15:09:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.714 15:09:12 -- scripts/common.sh@391 -- # pt= 00:04:02.714 15:09:12 -- scripts/common.sh@392 -- # return 1 00:04:02.714 15:09:12 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.714 1+0 records in 00:04:02.714 1+0 records out 00:04:02.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0021365 s, 491 MB/s 00:04:02.714 15:09:12 -- spdk/autotest.sh@118 -- # sync 00:04:02.714 15:09:12 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.714 15:09:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.714 15:09:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.850 15:09:20 -- spdk/autotest.sh@124 -- # uname -s 00:04:10.850 15:09:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:10.850 15:09:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:10.850 15:09:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.850 15:09:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.850 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:04:10.850 ************************************ 00:04:10.850 START TEST setup.sh 00:04:10.850 ************************************ 00:04:10.850 15:09:20 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:10.850 * Looking for test storage... 00:04:10.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.850 15:09:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:10.850 15:09:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:10.850 15:09:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:10.850 15:09:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.850 15:09:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.850 15:09:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.850 ************************************ 00:04:10.850 START TEST acl 00:04:10.850 ************************************ 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:10.850 * Looking for test storage... 
00:04:10.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.850 15:09:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:10.850 15:09:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:10.850 15:09:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.850 15:09:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.057 15:09:24 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:15.057 15:09:24 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:15.057 15:09:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.057 15:09:24 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:15.057 15:09:24 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.057 15:09:24 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:18.358 Hugepages 00:04:18.358 node hugesize free / total 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.358 00:04:18.358 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.358 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.620 15:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:18.620 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:18.621 15:09:28 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.621 15:09:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.621 15:09:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.621 15:09:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:18.621 ************************************ 00:04:18.621 START TEST denied 00:04:18.621 ************************************ 00:04:18.621 15:09:28 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:18.621 15:09:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:18.621 15:09:28 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:18.621 15:09:28 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.621 15:09:28 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:18.621 15:09:28 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.830 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:22.830 15:09:31 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.830 15:09:31 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.036 00:04:27.036 real 0m8.300s 00:04:27.036 user 0m2.694s 00:04:27.036 sys 0m4.870s 00:04:27.036 15:09:36 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.036 15:09:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:27.036 ************************************ 00:04:27.036 END TEST denied 00:04:27.036 ************************************ 00:04:27.036 15:09:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:27.036 15:09:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:27.036 15:09:36 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.036 15:09:36 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.036 15:09:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.036 ************************************ 00:04:27.036 START TEST allowed 00:04:27.036 ************************************ 00:04:27.036 15:09:36 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:27.036 15:09:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:27.036 15:09:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:27.036 15:09:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.036 15:09:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.036 15:09:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:32.325 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:32.325 15:09:41 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:32.325 15:09:41 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:32.325 15:09:41 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:32.325 15:09:41 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.325 15:09:41 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.572 00:04:36.572 real 0m9.393s 00:04:36.572 user 0m2.728s 00:04:36.572 sys 0m4.907s 00:04:36.572 15:09:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.572 15:09:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:36.572 ************************************ 00:04:36.572 END TEST allowed 00:04:36.572 ************************************ 00:04:36.572 15:09:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:36.572 00:04:36.572 real 0m25.676s 00:04:36.572 user 0m8.489s 00:04:36.572 sys 0m14.884s 00:04:36.572 15:09:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.572 15:09:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:36.572 ************************************ 00:04:36.572 END TEST acl 00:04:36.572 ************************************ 00:04:36.572 15:09:46 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:36.572 15:09:46 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.572 15:09:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.572 15:09:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.572 15:09:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:36.572 ************************************ 00:04:36.572 START TEST hugepages 00:04:36.572 ************************************ 00:04:36.572 15:09:46 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:36.572 * Looking for test storage... 00:04:36.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105574548 kB' 'MemAvailable: 110255380 kB' 'Buffers: 2704 kB' 'Cached: 11509132 kB' 'SwapCached: 0 kB' 'Active: 7468180 kB' 'Inactive: 4661420 kB' 'Active(anon): 7005540 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 621144 kB' 'Mapped: 217172 kB' 'Shmem: 6387776 kB' 'KReclaimable: 597716 kB' 'Slab: 1416912 kB' 'SReclaimable: 597716 kB' 'SUnreclaim: 819196 kB' 'KernelStack: 27776 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8571740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238248 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.572 15:09:46 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.572 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ (the same xtrace triplet -- IFS=': ', read -r var val _, a [[ <field> == Hugepagesize ]] test followed by continue -- repeats for every remaining /proc/meminfo field: Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted) 00:04:36.573 15:09:46
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.573 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.835 15:09:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.835 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
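The long stretch of xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time, skipping every field that is not the one requested, until Hugepagesize matches and its value (2048) is echoed back to hugepages.sh. A minimal sketch of that parsing pattern, assuming the standard /proc/meminfo layout (meminfo_value is an illustrative name, not the SPDK helper itself):

    meminfo_value() {
        # Print the value column for a given /proc/meminfo key, e.g. "Hugepagesize".
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue   # skip every non-matching key
            echo "$val"                         # e.g. 2048 for Hugepagesize (kB)
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example use mirroring the trace: default_hugepages=$(meminfo_value Hugepagesize)  -> 2048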
echo 0 00:04:36.836 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.836 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.836 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.836 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.836 15:09:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:36.836 15:09:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.836 15:09:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.836 15:09:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.836 ************************************ 00:04:36.836 START TEST default_setup 00:04:36.836 ************************************ 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.836 15:09:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.140 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.4 (8086 0b00): ioatdma -> 
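With the 2048 kB page size in hand, get_test_nr_hugepages converts the requested 2097152 kB (2 GiB) into 2097152 / 2048 = 1024 pages and assigns them to node 0 (nodes_test[0]=1024), while clear_hp first zeroes every per-node pool. The allocation itself is driven through scripts/setup.sh via the exported variables; the sketch below only reproduces the arithmetic and the stock kernel sysfs writes that correspond to it, not setup.sh's own code:

    # Sizing arithmetic taken from the trace above.
    size_kb=2097152                  # requested pool size (kB)
    hugepagesize_kb=2048             # from /proc/meminfo Hugepagesize
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 1024

    # Clear any existing 2 MiB pools, then allocate the pages on node 0 only,
    # which is the effect of nodes_test[0]=1024 with CLEAR_HUGE=yes.
    for nr in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-2048kB/nr_hugepages; do
        echo 0 | sudo tee "$nr" > /dev/null
    done
    echo "$nr_hugepages" | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages > /dev/null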
vfio-pci 00:04:40.140 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.140 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:40.404 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107719592 kB' 'MemAvailable: 112400360 kB' 'Buffers: 2704 kB' 'Cached: 11509252 kB' 'SwapCached: 0 kB' 'Active: 7484852 kB' 'Inactive: 4661420 kB' 'Active(anon): 7022212 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637636 kB' 'Mapped: 217424 kB' 'Shmem: 6387896 kB' 'KReclaimable: 597652 kB' 'Slab: 1414436 kB' 'SReclaimable: 597652 kB' 
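scripts/setup.sh then detaches the ioatdma and NVMe devices from their kernel drivers and hands them to vfio-pci, which is what the "... -> vfio-pci" lines record. Its own implementation is not shown in this excerpt; the hypothetical helper below illustrates the usual sysfs sequence (driver_override plus drivers_probe) behind such a rebind:

    # Illustrative only: a generic sysfs rebind matching a line like
    # "0000:65:00.0 (144d a80a): nvme -> vfio-pci".
    rebind_to_vfio() {
        local bdf=$1 dev=/sys/bus/pci/devices/$1
        [[ -e $dev/driver ]] && echo "$bdf" | sudo tee "$dev/driver/unbind" > /dev/null
        echo vfio-pci | sudo tee "$dev/driver_override" > /dev/null
        echo "$bdf"   | sudo tee /sys/bus/pci/drivers_probe > /dev/null
    }

    sudo modprobe vfio-pci
    rebind_to_vfio 0000:65:00.0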
'SUnreclaim: 816784 kB' 'KernelStack: 27632 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8588916 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238472 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- 
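verify_nr_hugepages begins by checking /sys/kernel/mm/transparent_hugepage/enabled; the test [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] seen earlier passes because the active mode is not [never], so AnonHugePages is sampled (0 here), after which HugePages_Surp and HugePages_Rsvd are collected through the same get_meminfo scans. A compact, self-contained equivalent of that gathering step, using awk as shorthand for the key lookup:

    # Sample anonymous huge pages only when THP is not disabled, then the
    # surplus and reserved counters, mirroring the anon/surp/resv values in the trace.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)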
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.404 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107723064 kB' 'MemAvailable: 112403832 kB' 'Buffers: 2704 kB' 'Cached: 11509256 kB' 'SwapCached: 0 kB' 'Active: 7484860 kB' 'Inactive: 4661420 kB' 'Active(anon): 7022220 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637688 kB' 'Mapped: 217408 kB' 'Shmem: 6387900 kB' 'KReclaimable: 597652 kB' 'Slab: 1414520 kB' 'SReclaimable: 597652 kB' 'SUnreclaim: 816868 kB' 'KernelStack: 27616 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8588936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238440 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.405 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107724160 kB' 'MemAvailable: 112404928 kB' 'Buffers: 2704 kB' 'Cached: 11509256 kB' 'SwapCached: 0 kB' 'Active: 7484508 kB' 'Inactive: 4661420 kB' 'Active(anon): 7021868 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637336 kB' 'Mapped: 217408 kB' 'Shmem: 6387900 kB' 'KReclaimable: 597652 kB' 'Slab: 1414520 kB' 'SReclaimable: 597652 kB' 'SUnreclaim: 816868 kB' 'KernelStack: 27600 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8588956 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238440 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 
15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:40.406 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.406 nr_hugepages=1024 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.407 resv_hugepages=0 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.407 surplus_hugepages=0 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.407 anon_hugepages=0 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 
15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107725096 kB' 'MemAvailable: 112405864 kB' 'Buffers: 2704 kB' 'Cached: 11509296 kB' 'SwapCached: 0 kB' 'Active: 7484896 kB' 'Inactive: 4661420 kB' 'Active(anon): 7022256 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637688 kB' 'Mapped: 217408 kB' 'Shmem: 6387940 kB' 'KReclaimable: 597652 kB' 'Slab: 1414520 kB' 'SReclaimable: 597652 kB' 'SUnreclaim: 816868 kB' 'KernelStack: 27616 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8588980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238440 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.407 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.670 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.670 15:09:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.671 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 59987684 kB' 'MemUsed: 5671328 kB' 'SwapCached: 0 kB' 'Active: 1581128 kB' 'Inactive: 332296 kB' 'Active(anon): 1408532 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618460 kB' 'Mapped: 118736 kB' 'AnonPages: 298308 kB' 'Shmem: 1113568 kB' 'KernelStack: 14968 kB' 'PageTables: 5776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194040 kB' 'Slab: 572432 kB' 
'SReclaimable: 194040 kB' 'SUnreclaim: 378392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.672 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
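When the scan finally reaches the HugePages_Surp field itself, get_meminfo echoes its value (0) and returns, which is what the '# echo 0' / '# return 0' entries just below record; verify_nr_hugepages then folds that surplus count into nodes_test[] and prints the 'node0=1024 expecting 1024' check. A rough manual spot-check of the same numbers, assuming the 2048 kB default hugepage size reported in this log and the standard sysfs layout (hypothetical commands, not part of the test output):

# default_setup expects 1024 default-size hugepages, all on node 0
cat /proc/sys/vm/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# the per_node_1G_alloc test that starts next requests 512 pages on each of
# nodes 0 and 1 instead (NRHUGE=512 HUGENODE=0,1 in the trace below), roughly:
echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 512 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages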
00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.673 node0=1024 expecting 1024 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.673 00:04:40.673 real 0m3.823s 00:04:40.673 user 0m1.488s 00:04:40.673 sys 0m2.290s 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.673 15:09:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:40.673 ************************************ 00:04:40.673 END TEST default_setup 00:04:40.673 ************************************ 00:04:40.673 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:40.673 15:09:50 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:40.673 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.673 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.673 15:09:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.673 ************************************ 00:04:40.673 START TEST per_node_1G_alloc 00:04:40.673 ************************************ 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.673 15:09:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:43.976 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:43.976 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 
0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:43.976 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107735824 kB' 'MemAvailable: 112416568 kB' 'Buffers: 2704 kB' 'Cached: 11509412 kB' 'SwapCached: 0 kB' 'Active: 7481528 kB' 'Inactive: 4661420 kB' 'Active(anon): 7018888 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634116 kB' 'Mapped: 216320 kB' 'Shmem: 6388056 kB' 'KReclaimable: 597628 kB' 'Slab: 1414528 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 816900 kB' 'KernelStack: 27648 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8580304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238456 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.243 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.244 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107739072 kB' 'MemAvailable: 112419816 kB' 'Buffers: 2704 kB' 'Cached: 11509416 kB' 'SwapCached: 0 kB' 'Active: 7482140 kB' 'Inactive: 4661420 kB' 'Active(anon): 7019500 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634780 kB' 'Mapped: 216292 kB' 'Shmem: 6388060 kB' 'KReclaimable: 597628 kB' 'Slab: 1414528 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 816900 kB' 'KernelStack: 27584 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8581696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238456 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 
15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.245 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.246 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107739772 kB' 'MemAvailable: 112420516 kB' 'Buffers: 2704 kB' 'Cached: 11509416 kB' 'SwapCached: 0 kB' 'Active: 7481592 kB' 'Inactive: 4661420 kB' 'Active(anon): 7018952 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634216 kB' 'Mapped: 216292 kB' 'Shmem: 6388060 kB' 'KReclaimable: 597628 kB' 'Slab: 1414628 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817000 kB' 'KernelStack: 27616 kB' 'PageTables: 8696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8580100 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238408 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 
15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.247 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.248 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.248 15:09:53 
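The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "Key: value" pair at a time, skipping every field that is not the requested one (here HugePages_Rsvd) and finally echoing its value, 0. A minimal stand-alone sketch of that lookup pattern, assuming the standard "Key:   value kB" meminfo layout (the helper name meminfo_lookup is hypothetical, not the script's own function):

#!/usr/bin/env bash
# Sketch only: return one /proc/meminfo field the way the traced loop does --
# read "var val" pairs with IFS=': ' and skip until "var" matches the target.
meminfo_lookup() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

resv=$(meminfo_lookup HugePages_Rsvd)    # 0 in this run
surp=$(meminfo_lookup HugePages_Surp)    # 0 in this run
echo "resv=$resv surp=$surp"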
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.249 nr_hugepages=1024 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.249 resv_hugepages=0 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.249 surplus_hugepages=0 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.249 anon_hugepages=0 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107741300 kB' 'MemAvailable: 112422044 kB' 'Buffers: 2704 kB' 'Cached: 11509456 kB' 'SwapCached: 0 kB' 'Active: 7481384 kB' 'Inactive: 4661420 kB' 'Active(anon): 7018744 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634000 kB' 'Mapped: 216292 kB' 'Shmem: 6388100 kB' 'KReclaimable: 597628 kB' 'Slab: 1414628 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817000 kB' 'KernelStack: 27600 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8580128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238424 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 
103809024 kB' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.249 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.250 15:09:53 
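At this point the test has read HugePages_Total (1024) and confirmed it matches nr_hugepages + surp + resv (1024 + 0 + 0) before calling get_nodes. A small self-contained sketch of that consistency check, assuming the same /proc/meminfo field names (the awk helper below is illustrative, not the script's code):

#!/usr/bin/env bash
# Sketch of the accounting check seen in the trace: total hugepages must
# equal the requested page count plus surplus plus reserved pages.
field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                    # count requested by the test
surp=$(field HugePages_Surp)         # 0 in this run
resv=$(field HugePages_Rsvd)         # 0 in this run
total=$(field HugePages_Total)       # 1024 in this run

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total = $nr_hugepages + $surp + $resv"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi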
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.250 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 61046500 kB' 'MemUsed: 4612512 kB' 'SwapCached: 0 kB' 'Active: 1573232 kB' 'Inactive: 332296 kB' 'Active(anon): 1400636 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618568 kB' 'Mapped: 117872 kB' 'AnonPages: 290268 kB' 'Shmem: 1113676 kB' 'KernelStack: 15048 kB' 'PageTables: 5560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 572492 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 
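Here get_nodes has enumerated /sys/devices/system/node/node0 and node1 (512 expected hugepages each, no_nodes=2), and the lookup switches to node0's own meminfo file, whose lines carry a "Node 0 " prefix as shown in the printf above. A brief sketch of reading the same counters per node under that file layout (nothing below is the script's exact code):

#!/usr/bin/env bash
# Sketch: per-node hugepage counters live in node<N>/meminfo, where each line
# is prefixed with "Node <N>", e.g. "Node 0 HugePages_Free:   512".
for f in /sys/devices/system/node/node[0-9]*/meminfo; do
    node=${f%/meminfo}; node=${node##*/}     # e.g. "node0"
    free=$(awk '$3 == "HugePages_Free:" {print $4}' "$f")
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$f")
    echo "$node: HugePages_Free=$free HugePages_Surp=$surp"
done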
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 
15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.251 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46695524 kB' 'MemUsed: 13984312 kB' 'SwapCached: 0 kB' 'Active: 5908648 kB' 'Inactive: 4329124 kB' 'Active(anon): 5618604 kB' 'Inactive(anon): 0 kB' 'Active(file): 290044 kB' 'Inactive(file): 4329124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9893616 kB' 'Mapped: 98420 kB' 'AnonPages: 344280 kB' 'Shmem: 5274448 kB' 
'KernelStack: 12744 kB' 'PageTables: 3784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 403620 kB' 'Slab: 842136 kB' 'SReclaimable: 403620 kB' 'SUnreclaim: 438516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
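The xtrace around this point is setup/common.sh's get_meminfo resolving HugePages_Surp for NUMA node 1: it switches mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo, strips the leading "Node 1 " prefix from each row, then scans every "key: value" pair, continuing past non-matching fields until the requested one is found and its value (0 for both nodes here) is echoed. A minimal standalone sketch of that lookup pattern, under the assumption of a hypothetical helper name meminfo_value; this is illustrative only, not the actual SPDK setup/common.sh implementation:

meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    # Prefer the per-node file when a node index is supplied (mirrors the trace above).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}              # per-node rows start with "Node N "
        IFS=': ' read -r var val _ <<< "$line"  # split "key: value [kB]"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

With the counters shown in this log, meminfo_value HugePages_Surp 1 would print 0, and meminfo_value MemFree (no node argument) would fall back to /proc/meminfo.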
00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.252 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.253 node0=512 expecting 512 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:44.253 node1=512 expecting 512 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:44.253 00:04:44.253 real 0m3.683s 00:04:44.253 user 0m1.418s 00:04:44.253 sys 0m2.278s 00:04:44.253 15:09:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.253 15:09:53 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.253 ************************************ 00:04:44.253 END TEST per_node_1G_alloc 00:04:44.253 ************************************ 00:04:44.515 15:09:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:44.515 15:09:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:44.515 15:09:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.515 15:09:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.515 15:09:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.515 ************************************ 00:04:44.515 START TEST even_2G_alloc 00:04:44.515 ************************************ 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:44.515 15:09:53 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.515 15:09:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.820 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:47.820 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107765280 kB' 'MemAvailable: 112446024 kB' 'Buffers: 2704 kB' 'Cached: 11509592 kB' 'SwapCached: 0 kB' 'Active: 7482820 kB' 'Inactive: 4661420 kB' 'Active(anon): 7020180 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634636 kB' 'Mapped: 216408 kB' 'Shmem: 6388236 kB' 'KReclaimable: 597628 kB' 'Slab: 1414996 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817368 kB' 'KernelStack: 27616 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8582628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238600 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
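For even_2G_alloc the test requested 2097152 kB of 2048 kB pages (nr_hugepages=1024) with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, and the /proc/meminfo snapshot printed above already reports HugePages_Total: 1024 and Hugetlb: 2097152 kB; the verify pass is now walking the same field scan for AnonHugePages before it checks the per-node split of 512/512 set up at the start of this test. A worked sketch of that expectation arithmetic, with hypothetical variable names chosen for illustration rather than taken from hugepages.sh:

size_kb=2097152                                  # total hugepage memory requested (2 GiB)
hugepagesize_kb=2048                             # Hugepagesize reported in /proc/meminfo
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 1024 pages in total
no_nodes=2                                       # two NUMA nodes on this rig
per_node=$(( nr_hugepages / no_nodes ))          # 512 pages expected on each node
echo "node0=$per_node expecting 512"
echo "node1=$per_node expecting 512"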
00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.820 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.821 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107764136 kB' 'MemAvailable: 112444880 kB' 'Buffers: 2704 kB' 'Cached: 11509596 kB' 'SwapCached: 0 kB' 'Active: 7483224 kB' 'Inactive: 4661420 kB' 'Active(anon): 7020584 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635068 kB' 'Mapped: 216392 kB' 'Shmem: 6388240 kB' 'KReclaimable: 597628 kB' 'Slab: 1414996 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817368 kB' 'KernelStack: 27824 kB' 'PageTables: 9608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8582652 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238552 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.088 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.089 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107763632 kB' 'MemAvailable: 112444376 kB' 'Buffers: 2704 kB' 'Cached: 11509628 kB' 'SwapCached: 0 kB' 'Active: 7483140 kB' 'Inactive: 4661420 kB' 'Active(anon): 7020500 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635392 kB' 'Mapped: 216316 kB' 'Shmem: 6388272 kB' 'KReclaimable: 597628 kB' 'Slab: 1415020 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817392 kB' 'KernelStack: 27792 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8583172 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238536 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
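The loop traced above and below is the get_meminfo helper in setup/common.sh: it scans every '<field>: <value>' line of /proc/meminfo, skips (continue) each field that is not the one requested, and echoes the value once it hits a match (here HugePages_Surp and then HugePages_Rsvd, both 0). Condensed into a sketch (reconstructed from this trace, not the verbatim SPDK source, with argument handling simplified), the helper behaves roughly like:

shopt -s extglob                              # needed for the +([0-9]) pattern below
get_meminfo() {                               # sketch only, reconstructed from the trace
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    # With a node argument, read that node's meminfo instead of the global file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix every line with "Node N"
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # this is the long run of 'continue' entries in the trace
        echo "$val" && return 0               # e.g. HugePages_Surp -> 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}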
00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 
15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.090 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.091 nr_hugepages=1024 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.091 resv_hugepages=0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.091 surplus_hugepages=0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.091 anon_hugepages=0 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:48.091 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.092 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107766504 kB' 'MemAvailable: 112447248 kB' 'Buffers: 2704 kB' 'Cached: 11509652 kB' 'SwapCached: 0 kB' 'Active: 7482740 kB' 'Inactive: 4661420 kB' 'Active(anon): 7020100 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635000 kB' 'Mapped: 216316 kB' 'Shmem: 6388296 kB' 'KReclaimable: 597628 kB' 'Slab: 1415020 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817392 kB' 'KernelStack: 27680 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8583196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238584 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 
15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.092 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
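For the per-node HugePages_Surp queries that follow (node 0, then node 1), the trace shows the same helper switching its input from /proc/meminfo to the node's sysfs meminfo and stripping the leading "Node N " prefix before running the identical field scan. A sketch of that source selection, based on the mem_f/mapfile steps visible in the trace; extglob is assumed for the prefix strip:

    # Pick the meminfo source: system-wide by default, per-node when a node
    # number is given and its sysfs meminfo file exists.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 "; drop that so the
    # "Field: value" parsing stays identical to the /proc/meminfo case.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp    # e.g. "HugePages_Surp: 0"
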
00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.093 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659012 kB' 'MemFree: 61035412 kB' 'MemUsed: 4623600 kB' 'SwapCached: 0 kB' 'Active: 1573044 kB' 'Inactive: 332296 kB' 'Active(anon): 1400448 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618684 kB' 'Mapped: 118376 kB' 'AnonPages: 289740 kB' 'Shmem: 1113792 kB' 'KernelStack: 14872 kB' 'PageTables: 5204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 572528 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.094 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46734044 kB' 'MemUsed: 13945792 kB' 'SwapCached: 0 kB' 'Active: 5910976 kB' 'Inactive: 4329124 kB' 'Active(anon): 5620932 kB' 'Inactive(anon): 0 kB' 'Active(file): 290044 kB' 'Inactive(file): 4329124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9893688 kB' 'Mapped: 98452 kB' 'AnonPages: 346512 kB' 'Shmem: 5274520 kB' 'KernelStack: 12760 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 403620 kB' 'Slab: 842492 kB' 'SReclaimable: 403620 kB' 'SUnreclaim: 438872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 
15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.095 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
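The hugepages.sh checks that close out just below are the arithmetic side of this test: 1024 2M pages were requested with even allocation, so after folding in reserved and surplus pages each of the two NUMA nodes is expected to hold exactly 512. A condensed, self-contained sketch of that comparison using the values visible in the trace (in the script they come from get_meminfo; the bookkeeping here is simplified):

    # Even 2G allocation: 1024 hugepages total, expected to split 512/512
    # across the two NUMA nodes.
    nr_hugepages=1024
    total=1024; surp=0; resv=0                    # HugePages_Total/Surp/Rsvd above
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total"

    nodes_test=(512 512)                          # per-node HugePages seen above
    node_surp=(0 0)                               # HugePages_Surp from node0/node1 meminfo
    for node in 0 1; do
        (( nodes_test[node] += resv + node_surp[node] ))
        echo "node$node=${nodes_test[node]} expecting 512"
    done
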
00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:48.096 node0=512 expecting 512 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:48.096 node1=512 expecting 512 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:48.096 00:04:48.096 real 0m3.678s 00:04:48.096 user 0m1.481s 00:04:48.096 sys 0m2.249s 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.096 15:09:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.096 ************************************ 00:04:48.096 END TEST even_2G_alloc 00:04:48.096 ************************************ 00:04:48.096 15:09:57 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:04:48.096 15:09:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:48.096 15:09:57 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.096 15:09:57 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.096 15:09:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.096 ************************************ 00:04:48.096 START TEST odd_alloc 00:04:48.096 ************************************ 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:48.096 15:09:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:48.097 15:09:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.097 15:09:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.363 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 
0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:52.363 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107800184 kB' 'MemAvailable: 112480928 kB' 'Buffers: 2704 kB' 'Cached: 11509784 kB' 'SwapCached: 0 kB' 'Active: 7484656 kB' 'Inactive: 4661420 kB' 'Active(anon): 7022016 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636868 kB' 'Mapped: 216368 kB' 'Shmem: 6388428 kB' 'KReclaimable: 597628 kB' 'Slab: 1415224 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817596 kB' 'KernelStack: 27568 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8581116 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238440 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.363 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.364 15:10:01 
00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.364 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... the setup/common.sh@31/@32 "[[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" skip-loop trace repeats identically for every remaining /proc/meminfo key (Inactive through HardwareCorrupted) ...]
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
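The trace above is the get_meminfo helper in setup/common.sh resolving AnonHugePages: it snapshots /proc/meminfo with mapfile, strips any "Node N " prefix, then re-reads the snapshot with IFS=': ' and skips every key until the requested one matches, at which point it echoes the value (0 here) and returns. Below is a minimal sketch of that pattern, reconstructed from the @16-@33 trace lines rather than copied from the SPDK source; the optional NUMA-node handling is an assumption inferred from the node/node/meminfo test seen in the trace.

    # Sketch only: reconstructed from the trace, not the verbatim setup/common.sh.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}    # key to look up, optional NUMA node (assumed parameter)
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # If a node is given and per-node meminfo exists, read that file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, if any
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # skip until the requested key matches
            echo "$val"                         # e.g. "0" for AnonHugePages in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # Example from this run: get_meminfo AnonHugePages  ->  0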
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.365 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107800660 kB' 'MemAvailable: 112481404 kB' 'Buffers: 2704 kB' 'Cached: 11509788 kB' 'SwapCached: 0 kB' 'Active: 7484232 kB' 'Inactive: 4661420 kB' 'Active(anon): 7021592 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636504 kB' 'Mapped: 216336 kB' 'Shmem: 6388432 kB' 'KReclaimable: 597628 kB' 'Slab: 1415232 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817604 kB' 'KernelStack: 27568 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8581136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238408 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB'
[... the setup/common.sh@31/@32 "[[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" skip-loop trace repeats identically for every /proc/meminfo key preceding it (MemTotal through HugePages_Rsvd) ...]
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
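The /proc/meminfo snapshot printed by get_meminfo also lets the odd allocation be sanity-checked by hand: the test requested an odd page count, and the kernel reports 'HugePages_Total: 1025' with 'Hugepagesize: 2048 kB', so the accounted hugetlb memory should be 1025 x 2048 kB = 2099200 kB, which is exactly the 'Hugetlb: 2099200 kB' figure in the snapshot. The figures below are taken from this log; a quick check in the same shell:

    # Values from this run's /proc/meminfo snapshot (HugePages_Total and Hugepagesize).
    pages=1025 page_kb=2048
    echo "$(( pages * page_kb )) kB"    # prints "2099200 kB", matching 'Hugetlb: 2099200 kB'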
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.367 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107801228 kB' 'MemAvailable: 112481972 kB' 'Buffers: 2704 kB' 'Cached: 11509788 kB' 'SwapCached: 0 kB' 'Active: 7484232 kB' 'Inactive: 4661420 kB' 'Active(anon): 7021592 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636504 kB' 'Mapped: 216336 kB' 'Shmem: 6388432 kB' 'KReclaimable: 597628 kB' 'Slab: 1415232 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817604 kB' 'KernelStack: 27568 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8581156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238408 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB'
[... the setup/common.sh@31/@32 "[[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" skip-loop trace repeats identically for every /proc/meminfo key preceding it (MemTotal through HugePages_Free) ...]
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:52.369 nr_hugepages=1025
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.369 resv_hugepages=0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.369 surplus_hugepages=0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.369 anon_hugepages=0
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
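Taken together, the setup/hugepages.sh@97-@110 trace shows the odd_alloc verification flow: the test gathers anonymous, surplus, and reserved huge-page counts via get_meminfo, prints the summary values seen above, and then asserts that the odd request (1025 pages) is fully reflected in nr_hugepages with no surplus or reserved pages making up the difference. A sketch of that flow, reconstructed from the trace and assuming a get_meminfo helper as sketched earlier (not the verbatim hugepages.sh source):

    # Sketch only: the check implied by the hugepages.sh@97-@110 trace lines.
    verify_odd_alloc() {
        local nr_hugepages=1025          # the odd page count requested by this test
        local anon surp resv
        anon=$(get_meminfo AnonHugePages)    # 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # The requested odd count must be fully accounted for: reserved plus
        # surplus pages explain any difference, and here both are zero.
        (( 1025 == nr_hugepages + surp + resv ))
        (( 1025 == nr_hugepages ))
    }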
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.369 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107801156 kB' 'MemAvailable: 112481900 kB' 'Buffers: 2704 kB' 'Cached: 11509824 kB' 'SwapCached: 0 kB' 'Active: 7484280 kB' 'Inactive: 4661420 kB' 'Active(anon): 7021640 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636508 kB' 'Mapped: 216336 kB' 'Shmem: 6388468 kB' 'KReclaimable: 597628 kB' 'Slab: 1415232 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817604 kB' 'KernelStack: 27568 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8581176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238408 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB'
[... the setup/common.sh@31/@32 "[[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" skip-loop trace continues over the /proc/meminfo keys preceding HugePages_Total ...]
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
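The long run of '[[ <field> == HugePages_Total ]] ... continue' lines above is xtrace output from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a per-node meminfo file when a node index is given), walks every 'field: value' pair, and echoes the value once the requested field is reached. A minimal bash sketch of the same idea, simplified and not the exact setup/common.sh implementation:

  # Illustrative only -- simplified form of the get_meminfo pattern traced above.
  get_meminfo() {                       # usage: get_meminfo <field> [node]
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"               # e.g. 1025 for HugePages_Total below
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix each line with "Node N"
      return 1
  }

The odd_alloc check that follows, '(( 1025 == nr_hugepages + surp + resv ))', compares the echoed total against the requested odd hugepage count plus surplus and reserved pages before the per-node counts are inspected.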
00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.370 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 61048484 kB' 'MemUsed: 4610528 kB' 'SwapCached: 0 kB' 'Active: 1572904 kB' 'Inactive: 332296 kB' 'Active(anon): 1400308 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618840 kB' 'Mapped: 117872 kB' 'AnonPages: 289604 kB' 'Shmem: 1113948 kB' 'KernelStack: 14936 kB' 'PageTables: 5592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 572860 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.371 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 46753060 kB' 'MemUsed: 13926776 kB' 'SwapCached: 0 kB' 'Active: 5911056 kB' 'Inactive: 4329124 kB' 'Active(anon): 5621012 kB' 'Inactive(anon): 0 kB' 'Active(file): 290044 kB' 'Inactive(file): 4329124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9893728 kB' 'Mapped: 98464 kB' 'AnonPages: 346520 kB' 'Shmem: 5274560 kB' 'KernelStack: 12616 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 403620 kB' 'Slab: 842372 kB' 'SReclaimable: 403620 kB' 'SUnreclaim: 438752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.372 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.373 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:52.374 node0=512 expecting 513 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:52.374 node1=513 expecting 512 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:52.374 00:04:52.374 real 0m3.774s 00:04:52.374 user 0m1.504s 00:04:52.374 sys 0m2.318s 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.374 15:10:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.374 ************************************ 00:04:52.374 END TEST odd_alloc 00:04:52.374 ************************************ 00:04:52.374 15:10:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:52.374 15:10:01 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:52.374 15:10:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.374 15:10:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.374 15:10:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.374 ************************************ 00:04:52.374 START TEST custom_alloc 00:04:52.374 ************************************ 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.374 15:10:01 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.374 15:10:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.721 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 
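scripts/setup.sh is invoked above with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', i.e. an explicit 2048 kB hugepage count per NUMA node (512 on node 0, 1024 on node 1, 1536 in total). Conceptually, such a per-node request is satisfied by writing each count into that node's sysfs hugepage control; a rough sketch of the mechanism (the apply_hugenode name is invented for illustration and this is not the actual scripts/setup.sh code):

  # Illustrative only -- apply a "nodes_hp[0]=512,nodes_hp[1]=1024" style request (needs root).
  apply_hugenode() {
      local req node pages
      local -a reqs
      IFS=',' read -ra reqs <<< "$1"
      for req in "${reqs[@]}"; do
          node=${req%%\]*}; node=${node##*\[}       # NUMA node index, e.g. 0
          pages=${req#*=}                           # hugepage count for that node, e.g. 512
          echo "$pages" > \
              "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
      done
  }
  apply_hugenode 'nodes_hp[0]=512,nodes_hp[1]=1024'

The surrounding 'Already using the vfio-pci driver' lines are setup.sh reporting that the PCI devices used by the test are already bound to vfio-pci, so no rebinding is needed before verify_nr_hugepages re-checks the counts.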
00:04:55.721 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:55.721 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106743708 kB' 'MemAvailable: 111424452 kB' 'Buffers: 2704 kB' 'Cached: 11509964 kB' 'SwapCached: 0 kB' 'Active: 7486508 kB' 'Inactive: 4661420 kB' 'Active(anon): 7023868 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 637932 kB' 'Mapped: 216324 kB' 'Shmem: 6388608 kB' 'KReclaimable: 597628 kB' 'Slab: 1415148 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817520 kB' 'KernelStack: 27600 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8581940 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238520 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.721 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.722 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.722 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
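The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key: every field that is not AnonHugePages hits the 'continue' branch, the matching field is echoed (here 0), and verify_nr_hugepages records it as anon=0. The same walk repeats below for HugePages_Surp and HugePages_Rsvd against the HugePages_Total/HugePages_Free figure of 1536, which is simply the per-node split requested earlier in the trace (nodes_hp[0]=512 plus nodes_hp[1]=1024). A minimal sketch of such a lookup, assuming a flat /proc/meminfo read; the function name and structure here are illustrative, not the script itself:

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument, the per-node meminfo file would be read instead
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # each non-matching key corresponds to one 'continue' line in the trace above
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

Against the meminfo dump logged above, get_meminfo_sketch HugePages_Total would print 1536 and get_meminfo_sketch AnonHugePages would print 0, matching the anon=0 result the script records.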
00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106745764 kB' 'MemAvailable: 111426508 kB' 'Buffers: 2704 kB' 'Cached: 11509968 kB' 'SwapCached: 0 kB' 'Active: 7486796 kB' 'Inactive: 4661420 kB' 'Active(anon): 7024156 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638740 kB' 'Mapped: 216796 kB' 'Shmem: 6388612 kB' 'KReclaimable: 597628 kB' 'Slab: 1415108 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817480 kB' 'KernelStack: 27584 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8583712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238456 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.723 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.724 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106745936 kB' 'MemAvailable: 111426680 kB' 'Buffers: 2704 kB' 'Cached: 11509984 kB' 'SwapCached: 0 kB' 'Active: 7491328 kB' 'Inactive: 4661420 kB' 'Active(anon): 7028688 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643276 kB' 'Mapped: 217056 kB' 'Shmem: 6388628 kB' 'KReclaimable: 597628 kB' 'Slab: 1415148 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817520 kB' 'KernelStack: 27600 kB' 'PageTables: 9144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8588216 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238444 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.725 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.726 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.727 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
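The repeating three-line pattern above is setup/common.sh's get_meminfo helper scanning /proc/meminfo field by field for the key it was asked for (HugePages_Rsvd at this point in the run); every non-matching field simply takes the continue branch, which is why the same IFS / read / continue trace lines recur once per key. A minimal sketch of that lookup, written from what the trace shows (the get/node/var/val/mem_f names come from the trace; the straight while-read loop and the return 1 fallback are simplifications, not the verbatim SPDK source):

    shopt -s extglob                               # the "Node +([0-9])" pattern below needs extglob
    get_meminfo() {
        local get=$1 node=$2                       # e.g. get=HugePages_Rsvd; node is empty or a node index
        local mem_f=/proc/meminfo line var val _
        # With a node index, read that node's meminfo instead (see the node=0 / node=1 calls later in this log).
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }            # per-node files prefix every field with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue       # non-matching fields just continue, as in the trace
            echo "$val"                            # numeric value; a trailing "kB" unit, if any, lands in _
            return 0
        done < "$mem_f"
        return 1
    }

Called as get_meminfo HugePages_Rsvd (system-wide) or get_meminfo HugePages_Surp 0 (per node), matching the invocations visible in this trace; here the HugePages_Rsvd lookup comes back as 0.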
00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:55.728 nr_hugepages=1536 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.728 resv_hugepages=0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.728 surplus_hugepages=0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.728 anon_hugepages=0 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106746980 kB' 'MemAvailable: 111427724 kB' 'Buffers: 2704 kB' 'Cached: 11510008 kB' 'SwapCached: 0 kB' 'Active: 7491560 kB' 'Inactive: 4661420 kB' 'Active(anon): 7028920 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 643520 kB' 'Mapped: 217056 kB' 'Shmem: 6388652 kB' 'KReclaimable: 597628 kB' 'Slab: 1415140 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817512 kB' 'KernelStack: 27616 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8588120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238380 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.728 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
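The counters echoed a little earlier in this trace (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check hugepages.sh performs before it looks at the per-node split: the HugePages_Total value being resolved by the scan above (it comes back as 1536 just below) must equal the requested page count plus surplus plus reserved. A hedged paraphrase of that arithmetic using this run's numbers (variable names are illustrative; get_meminfo is the sketch shown earlier):

    nr_hugepages=1536                          # pages this custom_alloc run asked for
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run (surplus_hugepages=0 above)
    total=$(get_meminfo HugePages_Total)       # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"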
00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.729 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for 
node in /sys/devices/system/node/node+([0-9]) 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 61039876 kB' 'MemUsed: 4619136 kB' 'SwapCached: 0 kB' 'Active: 1573112 kB' 'Inactive: 332296 kB' 'Active(anon): 1400516 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618940 kB' 'Mapped: 117872 kB' 'AnonPages: 289656 kB' 'Shmem: 1114048 kB' 'KernelStack: 14904 kB' 'PageTables: 5544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 572848 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 
15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.730 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 
15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
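Once the global total checks out, the test walks the NUMA nodes: the get_nodes trace above globs /sys/devices/system/node/node+([0-9]), records the per-node page counts (nodes_sys[0]=512, nodes_sys[1]=1024, no_nodes=2), which is the custom 512 + 1024 = 1536 split this run uses, and then re-runs get_meminfo against each node's own meminfo file; the HugePages_Surp scan of node0 in progress here is exactly that. A rough sketch of the per-node walk under the same assumptions as the earlier get_meminfo sketch (extglob enabled; expected counts taken from this run):

    declare -A expected=( [0]=512 [1]=1024 )   # per-node split used by this custom_alloc run
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}                # "node0" -> 0, "node1" -> 1
        total=$(get_meminfo HugePages_Total "$node")
        echo "node$node: HugePages_Total=$total (expected ${expected[$node]})"
    done

The traced script accumulates reserved and surplus pages into its nodes_test array rather than printing, but the files it reads and the 512/1024 expectation are the ones shown in this log.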
00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.731 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
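For reference, the per-node files read in this part of the trace (/sys/devices/system/node/node0/meminfo above, node1 a few lines below) prefix every field with "Node N ", which is why the trace shows mem=("${mem[@]#Node +([0-9]) }") stripping that prefix right after the mapfile call before the usual field-by-field scan. If only one field is needed from such a file, a one-line awk lookup works as well (illustration only, not what the script uses):

    awk '$3 == "HugePages_Total:" {print $4}' /sys/devices/system/node/node1/meminfo   # 1024 in this run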
00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45712104 kB' 'MemUsed: 14967732 kB' 'SwapCached: 0 kB' 'Active: 5912156 kB' 'Inactive: 4329124 kB' 'Active(anon): 5622112 kB' 'Inactive(anon): 0 kB' 'Active(file): 290044 kB' 'Inactive(file): 4329124 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9893812 kB' 'Mapped: 98500 kB' 'AnonPages: 347516 kB' 'Shmem: 5274644 kB' 'KernelStack: 12648 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 403620 kB' 'Slab: 842292 kB' 'SReclaimable: 403620 kB' 'SUnreclaim: 438672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 
15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.732 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.733 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.734 15:10:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:55.734 node0=512 expecting 512 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:55.734 node1=1024 expecting 1024 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:55.734 00:04:55.734 real 0m3.723s 00:04:55.734 user 0m1.499s 00:04:55.734 sys 0m2.276s 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.734 15:10:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:55.734 ************************************ 00:04:55.734 END TEST custom_alloc 00:04:55.734 ************************************ 00:04:55.734 15:10:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:55.734 15:10:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:55.734 15:10:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.734 15:10:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.734 15:10:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:55.734 ************************************ 00:04:55.734 START TEST no_shrink_alloc 00:04:55.734 ************************************ 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:55.734 15:10:05 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.734 15:10:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.947 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:59.947 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107757864 kB' 'MemAvailable: 112438608 kB' 'Buffers: 2704 kB' 'Cached: 11510140 kB' 'SwapCached: 0 kB' 'Active: 7485928 kB' 'Inactive: 4661420 kB' 'Active(anon): 7023288 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637872 kB' 'Mapped: 216416 kB' 'Shmem: 6388784 kB' 'KReclaimable: 597628 kB' 'Slab: 1415496 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817868 kB' 'KernelStack: 27600 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8583120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238312 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.947 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 
15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.948 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.949 15:10:08 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107757280 kB' 'MemAvailable: 112438024 kB' 'Buffers: 2704 kB' 'Cached: 11510144 kB' 'SwapCached: 0 kB' 'Active: 7486040 kB' 'Inactive: 4661420 kB' 'Active(anon): 7023400 
kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638000 kB' 'Mapped: 216376 kB' 'Shmem: 6388788 kB' 'KReclaimable: 597628 kB' 'Slab: 1415488 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817860 kB' 'KernelStack: 27568 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8583136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238280 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.949 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.950 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107759892 kB' 'MemAvailable: 112440636 kB' 'Buffers: 2704 kB' 'Cached: 11510160 kB' 'SwapCached: 0 kB' 'Active: 7486316 kB' 'Inactive: 4661420 kB' 'Active(anon): 7023676 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638288 kB' 'Mapped: 216376 kB' 'Shmem: 6388804 kB' 
'KReclaimable: 597628 kB' 'Slab: 1415592 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817964 kB' 'KernelStack: 27568 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8583160 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238296 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.951 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.952 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.953 nr_hugepages=1024 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.953 resv_hugepages=0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.953 surplus_hugepages=0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.953 anon_hugepages=0 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107760480 kB' 'MemAvailable: 112441224 kB' 'Buffers: 2704 kB' 'Cached: 11510180 kB' 'SwapCached: 0 kB' 'Active: 7486452 kB' 'Inactive: 4661420 kB' 'Active(anon): 7023812 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638412 kB' 'Mapped: 216376 kB' 'Shmem: 6388824 kB' 'KReclaimable: 597628 kB' 'Slab: 1415592 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817964 kB' 'KernelStack: 27568 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8584424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238280 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.953 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.954 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 60013592 kB' 'MemUsed: 5645420 kB' 'SwapCached: 0 kB' 'Active: 1576260 kB' 'Inactive: 332296 kB' 'Active(anon): 1403664 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1618984 kB' 'Mapped: 117872 kB' 'AnonPages: 293440 kB' 'Shmem: 1114092 kB' 'KernelStack: 14920 kB' 'PageTables: 5664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 573004 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378996 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 
15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.955 15:10:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.956 15:10:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:03.255 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:03.255 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:03.255 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:03.255 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:03.255 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:03.255 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:05:03.256 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:03.256 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107793676 kB' 'MemAvailable: 112474420 kB' 'Buffers: 2704 kB' 'Cached: 11510296 kB' 'SwapCached: 0 kB' 'Active: 7489860 kB' 'Inactive: 4661420 kB' 'Active(anon): 7027220 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641164 kB' 'Mapped: 216520 kB' 'Shmem: 6388940 kB' 'KReclaimable: 597628 kB' 'Slab: 1415412 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817784 kB' 'KernelStack: 27808 kB' 'PageTables: 9796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8587432 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 
238504 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.256 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 
15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 
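The xtrace above and below is setup/common.sh's get_meminfo helper doing a linear scan of the meminfo file: it sets IFS=': ', reads each line into var/val, hits 'continue' for every key that is not the one requested, and echoes the value of the first match (AnonHugePages just returned 0, stored as anon=0; the next pass looks up HugePages_Surp). A minimal sketch of that pattern, assuming plain bash, is below; the function name get_meminfo_sketch and the sed-based node handling are illustrative only, not the script's actual code.

#!/usr/bin/env bash
# Sketch of the meminfo lookup exercised by the trace (illustrative only).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo file if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node <id> "; strip it so the
    # key lands in $var exactly as it would when reading /proc/meminfo.
    sed -E 's/^Node [0-9]+ //' "$mem_f" | while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ...
        echo "$val"                        # value in kB, or a bare count
        break
    done
}

On the node dumped above, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp would print 0, matching the anon/surp values the trace records.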
00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107794636 kB' 'MemAvailable: 112475380 kB' 'Buffers: 2704 kB' 'Cached: 11510296 kB' 'SwapCached: 0 kB' 'Active: 7489096 kB' 'Inactive: 4661420 kB' 'Active(anon): 7026456 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640332 kB' 'Mapped: 216476 kB' 'Shmem: 6388940 kB' 'KReclaimable: 597628 kB' 'Slab: 1415388 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817760 kB' 'KernelStack: 27696 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8585716 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238472 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.257 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.258 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.259 
15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107792612 kB' 'MemAvailable: 112473356 kB' 'Buffers: 2704 kB' 'Cached: 11510316 kB' 'SwapCached: 0 kB' 'Active: 7488364 kB' 'Inactive: 4661420 kB' 'Active(anon): 7025724 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640020 kB' 'Mapped: 216400 kB' 'Shmem: 6388960 kB' 'KReclaimable: 597628 kB' 'Slab: 1415388 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817760 kB' 'KernelStack: 27632 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8587472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238472 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.259 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.260 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:03.261 nr_hugepages=1024 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:03.261 resv_hugepages=0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:03.261 surplus_hugepages=0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:03.261 anon_hugepages=0 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
node= 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107792896 kB' 'MemAvailable: 112473640 kB' 'Buffers: 2704 kB' 'Cached: 11510316 kB' 'SwapCached: 0 kB' 'Active: 7488064 kB' 'Inactive: 4661420 kB' 'Active(anon): 7025424 kB' 'Inactive(anon): 0 kB' 'Active(file): 462640 kB' 'Inactive(file): 4661420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639712 kB' 'Mapped: 216400 kB' 'Shmem: 6388960 kB' 'KReclaimable: 597628 kB' 'Slab: 1415388 kB' 'SReclaimable: 597628 kB' 'SUnreclaim: 817760 kB' 'KernelStack: 27680 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8587492 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238552 kB' 'VmallocChunk: 0 kB' 'Percpu: 190656 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3810680 kB' 'DirectMap2M: 28375040 kB' 'DirectMap1G: 103809024 kB' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.261 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.523 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.524 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:03.525 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.525 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659012 kB' 'MemFree: 60026924 kB' 'MemUsed: 5632088 kB' 'SwapCached: 0 kB' 'Active: 1577124 kB' 'Inactive: 332296 kB' 'Active(anon): 1404528 kB' 'Inactive(anon): 0 kB' 'Active(file): 172596 kB' 'Inactive(file): 332296 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1619032 kB' 'Mapped: 117872 kB' 'AnonPages: 293548 kB' 'Shmem: 1114140 kB' 'KernelStack: 14952 kB' 'PageTables: 5448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194008 kB' 'Slab: 572908 kB' 'SReclaimable: 194008 kB' 'SUnreclaim: 378900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.526 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 
15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:05:03.527 node0=1024 expecting 1024 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:03.527 00:05:03.527 real 0m7.607s 00:05:03.527 user 0m2.900s 00:05:03.527 sys 0m4.802s 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.527 15:10:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:03.527 ************************************ 00:05:03.527 END TEST no_shrink_alloc 00:05:03.527 ************************************ 00:05:03.527 15:10:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:03.527 15:10:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:03.527 00:05:03.527 real 0m26.922s 00:05:03.527 user 0m10.521s 00:05:03.527 sys 0m16.650s 00:05:03.527 15:10:12 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.527 15:10:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:03.527 ************************************ 00:05:03.527 END TEST hugepages 00:05:03.527 ************************************ 00:05:03.527 15:10:13 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:03.527 15:10:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:03.527 15:10:13 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.527 15:10:13 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.527 15:10:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:03.527 ************************************ 00:05:03.527 START TEST driver 00:05:03.527 ************************************ 00:05:03.527 15:10:13 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:03.788 * Looking for test storage... 
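Before the driver suite takes over, note what the clear_hp teardown just above actually did: it looped over every hugepage pool under each NUMA node and zeroed it, then exported CLEAR_HUGE=yes so the next scripts/setup.sh invocation starts from a clean slate. The "echo 0" in the trace carries no visible redirection (xtrace never shows redirections); presumably the value lands in each pool's nr_hugepages, which is what this sketch assumes:

  # Sketch only: release every reserved hugepage on every node.
  # Writing to nr_hugepages is an assumption; the trace shows only "echo 0".
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done
  export CLEAR_HUGE=yes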
00:05:03.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:03.788 15:10:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:03.788 15:10:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:03.789 15:10:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.074 15:10:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:09.074 15:10:18 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.074 15:10:18 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.074 15:10:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:09.074 ************************************ 00:05:09.074 START TEST guess_driver 00:05:09.074 ************************************ 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 351 > 0 )) 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:09.074 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:09.074 Looking for driver=vfio-pci 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:09.074 15:10:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.380 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.381 15:10:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.667 00:05:17.667 real 0m8.712s 00:05:17.667 user 0m2.797s 00:05:17.667 sys 0m5.172s 00:05:17.667 15:10:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.667 15:10:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.667 ************************************ 00:05:17.667 END TEST guess_driver 00:05:17.667 ************************************ 00:05:17.667 15:10:26 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:17.667 00:05:17.667 real 0m13.742s 00:05:17.668 user 0m4.318s 00:05:17.668 sys 0m7.937s 00:05:17.668 15:10:26 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.668 15:10:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:17.668 ************************************ 00:05:17.668 END TEST driver 00:05:17.668 ************************************ 00:05:17.668 15:10:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:17.668 15:10:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.668 15:10:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.668 15:10:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.668 15:10:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.668 ************************************ 00:05:17.668 START TEST devices 00:05:17.668 ************************************ 00:05:17.668 15:10:26 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:17.668 * Looking for test storage... 00:05:17.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:17.668 15:10:26 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:17.668 15:10:26 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:17.668 15:10:26 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.668 15:10:26 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.895 15:10:30 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:21.895 15:10:30 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:21.895 15:10:30 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:21.895 
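That closes out the driver suite. Its guess_driver pass predicts which driver scripts/setup.sh will bind and then checks the setup.sh config output against that prediction: if the host exposes IOMMU groups (or vfio's unsafe no-IOMMU mode is switched on) and modprobe can resolve vfio_pci, it expects vfio-pci; otherwise it would fall back to a uio driver, a path not exercised in this run and therefore an assumption below. A condensed sketch of that choice, using the same sysfs paths seen in the trace:

  # Sketch only: choose vfio-pci when the IOMMU/vfio prerequisites are met.
  pick_driver() {
      local unsafe=N
      if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
          unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
      fi
      local -a groups=(/sys/kernel/iommu_groups/*)    # 351 groups on this node
      if { (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; } &&
          modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo uio_pci_generic    # assumed fallback, not shown in this log
      fi
  }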
15:10:30 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:21.895 No valid GPT data, bailing 00:05:21.895 15:10:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.895 15:10:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:21.895 15:10:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:21.895 15:10:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:21.895 15:10:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:21.895 15:10:31 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:21.895 15:10:31 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:21.895 15:10:31 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.895 15:10:31 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.895 15:10:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:21.895 ************************************ 00:05:21.895 START TEST nvme_mount 00:05:21.895 ************************************ 00:05:21.895 15:10:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:21.895 15:10:31 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:21.895 15:10:31 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:21.896 15:10:31 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:22.836 Creating new GPT entries in memory. 00:05:22.836 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:22.836 other utilities. 00:05:22.836 15:10:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:22.836 15:10:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:22.836 15:10:32 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:22.836 15:10:32 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:22.836 15:10:32 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:23.782 Creating new GPT entries in memory. 00:05:23.782 The operation has completed successfully. 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 458232 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.782 15:10:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.782 15:10:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.127 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:27.389 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:27.389 15:10:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.649 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:27.649 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:27.649 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.649 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.649 15:10:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.952 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.952 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.952 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:31.214 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.215 15:10:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.215 15:10:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.421 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.421 00:05:35.421 real 0m13.351s 00:05:35.421 user 0m4.035s 00:05:35.421 sys 0m7.185s 00:05:35.421 15:10:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.421 15:10:44 
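That wraps up the core of the nvme_mount flow: zap the GPT on the test disk, carve out a 1 GiB first partition, format it ext4, mount it under the repo's test/setup tree, confirm that scripts/setup.sh reports the device as actively mounted and skips rebinding it, then unwind with umount and wipefs (a second pass repeats the cycle against the whole, unpartitioned disk). A condensed sketch of the same sequence; the disk and mount point are the ones from this run, so adjust them for another machine:

  # Sketch only: the partition/format/mount/verify/cleanup cycle traced above.
  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all
  sgdisk "$disk" --new=1:2048:2099199    # sectors 2048..2099199 = 1 GiB
  # (the real flow waits for the partition uevent via scripts/sync_dev_uevents.sh here)
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                 # dummy file the verify step checks for
  # scripts/setup.sh config should now list the disk as an active mount and skip it
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"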
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:35.421 ************************************ 00:05:35.421 END TEST nvme_mount 00:05:35.421 ************************************ 00:05:35.421 15:10:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:35.421 15:10:44 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:35.421 15:10:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.421 15:10:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.421 15:10:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:35.421 ************************************ 00:05:35.421 START TEST dm_mount 00:05:35.421 ************************************ 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.421 15:10:44 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:35.994 Creating new GPT entries in memory. 00:05:35.994 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.994 other utilities. 00:05:35.994 15:10:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.994 15:10:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.994 15:10:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:35.994 15:10:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.994 15:10:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:36.935 Creating new GPT entries in memory. 00:05:36.935 The operation has completed successfully. 00:05:36.935 15:10:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:36.935 15:10:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.935 15:10:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.935 15:10:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.935 15:10:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:38.319 The operation has completed successfully. 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 463638 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.319 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.320 15:10:47 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.628 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:41.629 15:10:50 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.629 15:10:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:44.984 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:44.984 00:05:44.984 real 0m10.007s 00:05:44.984 user 0m2.451s 00:05:44.984 sys 0m4.539s 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.984 15:10:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:44.984 ************************************ 00:05:44.984 END TEST dm_mount 00:05:44.984 ************************************ 00:05:44.984 15:10:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.984 15:10:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:45.244 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:45.244 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:45.244 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:45.244 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.244 15:10:54 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:45.244 00:05:45.244 real 0m27.963s 00:05:45.244 user 0m8.123s 00:05:45.244 sys 0m14.563s 00:05:45.244 15:10:54 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.244 15:10:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:45.244 ************************************ 00:05:45.244 END TEST devices 00:05:45.244 ************************************ 00:05:45.503 15:10:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:45.503 00:05:45.503 real 1m34.680s 00:05:45.503 user 0m31.596s 00:05:45.503 sys 0m54.286s 00:05:45.503 15:10:54 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.503 15:10:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:45.503 ************************************ 00:05:45.503 END TEST setup.sh 00:05:45.503 ************************************ 00:05:45.503 15:10:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:45.503 15:10:54 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:48.797 Hugepages 00:05:48.797 node hugesize free / total 00:05:49.056 node0 1048576kB 0 / 0 00:05:49.056 node0 2048kB 2048 / 2048 00:05:49.056 node1 1048576kB 0 / 0 00:05:49.056 node1 2048kB 0 / 0 00:05:49.056 00:05:49.056 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:49.056 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:49.056 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:49.056 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:49.056 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:49.056 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:49.056 15:10:58 -- spdk/autotest.sh@130 -- # uname -s 00:05:49.056 15:10:58 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:49.056 15:10:58 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:49.056 15:10:58 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:53.258 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:53.258 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:54.664 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:54.664 15:11:04 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:55.605 15:11:05 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:55.605 15:11:05 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:55.605 15:11:05 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:55.605 15:11:05 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:55.605 15:11:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:55.605 15:11:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:55.605 15:11:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.605 15:11:05 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:55.606 15:11:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:55.866 15:11:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:55.866 15:11:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:55.866 15:11:05 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:59.166 Waiting for block devices as requested 00:05:59.166 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:59.427 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:59.427 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:59.427 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:59.427 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:59.689 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:59.689 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:59.689 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:59.950 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:59.951 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:00.211 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:00.211 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:00.211 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:00.211 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:00.504 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:00.504 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:00.504 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:00.504 15:11:10 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:00.504 15:11:10 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:00.504 15:11:10 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:00.504 15:11:10 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:00.504 15:11:10 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:00.504 15:11:10 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:00.504 15:11:10 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:00.504 15:11:10 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:00.504 15:11:10 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:00.504 15:11:10 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:00.504 15:11:10 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:00.504 15:11:10 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:00.504 15:11:10 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:00.504 15:11:10 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:00.504 15:11:10 -- common/autotest_common.sh@1557 -- # continue 00:06:00.504 15:11:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:00.504 15:11:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.504 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:06:00.504 15:11:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:00.504 15:11:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.504 15:11:10 -- common/autotest_common.sh@10 -- # set +x 00:06:00.766 15:11:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:04.069 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
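A minimal sketch of the NVMe discovery and OACS probe being traced above, assuming the workspace path used throughout this job; list_nvme_bdfs is an illustrative name, not the harness's own helper.

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    list_nvme_bdfs() {
      # gen_nvme.sh emits bdev_nvme_attach_controller entries; keep only the PCI addresses
      "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }
    for bdf in $(list_nvme_bdfs); do
      for link in /sys/class/nvme/nvme*; do
        # resolve which nvme controller node sits behind this PCI address
        [[ $(readlink -f "$link") == *"$bdf"* ]] && ctrl=$(basename "$link")
      done
      oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)   # e.g. ' 0x5f' in this log
      # OACS bit 3 (mask 0x8) advertises namespace management, the capability checked above
      (( oacs & 0x8 )) && echo "$bdf ($ctrl): namespace management supported"
    done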
00:06:04.069 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:04.069 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:04.330 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:04.330 15:11:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:04.330 15:11:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.330 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.330 15:11:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:04.330 15:11:13 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:04.330 15:11:13 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:04.330 15:11:13 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:04.330 15:11:13 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:04.330 15:11:13 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:04.330 15:11:13 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:04.330 15:11:13 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:04.330 15:11:13 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:04.330 15:11:13 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:04.330 15:11:13 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:04.590 15:11:13 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:04.590 15:11:13 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:04.590 15:11:13 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:04.590 15:11:13 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:04.590 15:11:13 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:04.590 15:11:13 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:04.590 15:11:13 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:04.590 15:11:13 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:04.590 15:11:13 -- common/autotest_common.sh@1593 -- # return 0 00:06:04.590 15:11:13 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:04.590 15:11:13 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:04.590 15:11:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:04.590 15:11:13 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:04.590 15:11:13 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:04.590 15:11:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.590 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.590 15:11:13 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:04.590 15:11:13 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:04.590 15:11:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.590 15:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.591 15:11:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.591 ************************************ 00:06:04.591 START TEST env 00:06:04.591 ************************************ 00:06:04.591 15:11:14 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:04.591 * Looking for test storage... 
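A hedged sketch of the device-id filter that opal_revert_cleanup runs at this point, reusing list_nvme_bdfs from the sketch above; the function body is illustrative. 0x0a54 is the PCI device id the revert step looks for, while this node's controller reports 0xa80a, so the filtered list stays empty and the step is skipped.

    get_nvme_bdfs_by_id() {
      local id=$1 bdf
      for bdf in $(list_nvme_bdfs); do
        # compare the sysfs device id against the requested one
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$id" ]] && echo "$bdf"
      done
    }
    get_nvme_bdfs_by_id 0x0a54    # prints nothing on this machine -> opal revert skipped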
00:06:04.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:04.591 15:11:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:04.591 15:11:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.591 15:11:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.591 15:11:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.591 ************************************ 00:06:04.591 START TEST env_memory 00:06:04.591 ************************************ 00:06:04.591 15:11:14 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:04.591 00:06:04.591 00:06:04.591 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.591 http://cunit.sourceforge.net/ 00:06:04.591 00:06:04.591 00:06:04.591 Suite: memory 00:06:04.591 Test: alloc and free memory map ...[2024-07-15 15:11:14.207828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:04.852 passed 00:06:04.852 Test: mem map translation ...[2024-07-15 15:11:14.235038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:04.852 [2024-07-15 15:11:14.235072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:04.852 [2024-07-15 15:11:14.235120] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:04.852 [2024-07-15 15:11:14.235126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:04.852 passed 00:06:04.852 Test: mem map registration ...[2024-07-15 15:11:14.292757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:04.852 [2024-07-15 15:11:14.292778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:04.852 passed 00:06:04.852 Test: mem map adjacent registrations ...passed 00:06:04.852 00:06:04.852 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.852 suites 1 1 n/a 0 0 00:06:04.852 tests 4 4 4 0 0 00:06:04.852 asserts 152 152 152 0 n/a 00:06:04.852 00:06:04.852 Elapsed time = 0.202 seconds 00:06:04.852 00:06:04.852 real 0m0.216s 00:06:04.852 user 0m0.204s 00:06:04.852 sys 0m0.011s 00:06:04.852 15:11:14 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.852 15:11:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:04.852 ************************************ 00:06:04.852 END TEST env_memory 00:06:04.852 ************************************ 00:06:04.852 15:11:14 env -- common/autotest_common.sh@1142 -- # return 0 00:06:04.852 15:11:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:04.852 15:11:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
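The env unit binaries can also be replayed by hand outside the run_test wrapper; a hedged example assuming the workspace path recorded in this job, with both binary paths taken verbatim from the trace.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/env/memory/memory_ut     # the env_memory CUnit suite summarized above
    ./test/env/vtophys/vtophys      # the env_vtophys suite that starts next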
00:06:04.852 15:11:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.852 15:11:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.852 ************************************ 00:06:04.852 START TEST env_vtophys 00:06:04.852 ************************************ 00:06:04.852 15:11:14 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:04.852 EAL: lib.eal log level changed from notice to debug 00:06:04.852 EAL: Detected lcore 0 as core 0 on socket 0 00:06:04.852 EAL: Detected lcore 1 as core 1 on socket 0 00:06:04.852 EAL: Detected lcore 2 as core 2 on socket 0 00:06:04.852 EAL: Detected lcore 3 as core 3 on socket 0 00:06:04.852 EAL: Detected lcore 4 as core 4 on socket 0 00:06:04.852 EAL: Detected lcore 5 as core 5 on socket 0 00:06:04.852 EAL: Detected lcore 6 as core 6 on socket 0 00:06:04.852 EAL: Detected lcore 7 as core 7 on socket 0 00:06:04.852 EAL: Detected lcore 8 as core 8 on socket 0 00:06:04.852 EAL: Detected lcore 9 as core 9 on socket 0 00:06:04.852 EAL: Detected lcore 10 as core 10 on socket 0 00:06:04.852 EAL: Detected lcore 11 as core 11 on socket 0 00:06:04.852 EAL: Detected lcore 12 as core 12 on socket 0 00:06:04.852 EAL: Detected lcore 13 as core 13 on socket 0 00:06:04.852 EAL: Detected lcore 14 as core 14 on socket 0 00:06:04.852 EAL: Detected lcore 15 as core 15 on socket 0 00:06:04.852 EAL: Detected lcore 16 as core 16 on socket 0 00:06:04.852 EAL: Detected lcore 17 as core 17 on socket 0 00:06:04.852 EAL: Detected lcore 18 as core 18 on socket 0 00:06:04.852 EAL: Detected lcore 19 as core 19 on socket 0 00:06:04.852 EAL: Detected lcore 20 as core 20 on socket 0 00:06:04.852 EAL: Detected lcore 21 as core 21 on socket 0 00:06:04.852 EAL: Detected lcore 22 as core 22 on socket 0 00:06:04.852 EAL: Detected lcore 23 as core 23 on socket 0 00:06:04.852 EAL: Detected lcore 24 as core 24 on socket 0 00:06:04.852 EAL: Detected lcore 25 as core 25 on socket 0 00:06:04.852 EAL: Detected lcore 26 as core 26 on socket 0 00:06:04.852 EAL: Detected lcore 27 as core 27 on socket 0 00:06:04.852 EAL: Detected lcore 28 as core 28 on socket 0 00:06:04.852 EAL: Detected lcore 29 as core 29 on socket 0 00:06:04.852 EAL: Detected lcore 30 as core 30 on socket 0 00:06:04.852 EAL: Detected lcore 31 as core 31 on socket 0 00:06:04.852 EAL: Detected lcore 32 as core 32 on socket 0 00:06:04.852 EAL: Detected lcore 33 as core 33 on socket 0 00:06:04.852 EAL: Detected lcore 34 as core 34 on socket 0 00:06:04.852 EAL: Detected lcore 35 as core 35 on socket 0 00:06:04.852 EAL: Detected lcore 36 as core 0 on socket 1 00:06:04.852 EAL: Detected lcore 37 as core 1 on socket 1 00:06:04.852 EAL: Detected lcore 38 as core 2 on socket 1 00:06:04.852 EAL: Detected lcore 39 as core 3 on socket 1 00:06:04.852 EAL: Detected lcore 40 as core 4 on socket 1 00:06:04.852 EAL: Detected lcore 41 as core 5 on socket 1 00:06:04.852 EAL: Detected lcore 42 as core 6 on socket 1 00:06:04.852 EAL: Detected lcore 43 as core 7 on socket 1 00:06:04.852 EAL: Detected lcore 44 as core 8 on socket 1 00:06:05.117 EAL: Detected lcore 45 as core 9 on socket 1 00:06:05.117 EAL: Detected lcore 46 as core 10 on socket 1 00:06:05.117 EAL: Detected lcore 47 as core 11 on socket 1 00:06:05.117 EAL: Detected lcore 48 as core 12 on socket 1 00:06:05.117 EAL: Detected lcore 49 as core 13 on socket 1 00:06:05.117 EAL: Detected lcore 50 as core 14 on socket 1 00:06:05.117 EAL: Detected lcore 51 as core 15 on socket 1 00:06:05.117 
EAL: Detected lcore 52 as core 16 on socket 1 00:06:05.117 EAL: Detected lcore 53 as core 17 on socket 1 00:06:05.117 EAL: Detected lcore 54 as core 18 on socket 1 00:06:05.117 EAL: Detected lcore 55 as core 19 on socket 1 00:06:05.117 EAL: Detected lcore 56 as core 20 on socket 1 00:06:05.117 EAL: Detected lcore 57 as core 21 on socket 1 00:06:05.117 EAL: Detected lcore 58 as core 22 on socket 1 00:06:05.117 EAL: Detected lcore 59 as core 23 on socket 1 00:06:05.117 EAL: Detected lcore 60 as core 24 on socket 1 00:06:05.117 EAL: Detected lcore 61 as core 25 on socket 1 00:06:05.117 EAL: Detected lcore 62 as core 26 on socket 1 00:06:05.117 EAL: Detected lcore 63 as core 27 on socket 1 00:06:05.117 EAL: Detected lcore 64 as core 28 on socket 1 00:06:05.117 EAL: Detected lcore 65 as core 29 on socket 1 00:06:05.117 EAL: Detected lcore 66 as core 30 on socket 1 00:06:05.117 EAL: Detected lcore 67 as core 31 on socket 1 00:06:05.117 EAL: Detected lcore 68 as core 32 on socket 1 00:06:05.117 EAL: Detected lcore 69 as core 33 on socket 1 00:06:05.117 EAL: Detected lcore 70 as core 34 on socket 1 00:06:05.117 EAL: Detected lcore 71 as core 35 on socket 1 00:06:05.117 EAL: Detected lcore 72 as core 0 on socket 0 00:06:05.117 EAL: Detected lcore 73 as core 1 on socket 0 00:06:05.117 EAL: Detected lcore 74 as core 2 on socket 0 00:06:05.117 EAL: Detected lcore 75 as core 3 on socket 0 00:06:05.117 EAL: Detected lcore 76 as core 4 on socket 0 00:06:05.117 EAL: Detected lcore 77 as core 5 on socket 0 00:06:05.117 EAL: Detected lcore 78 as core 6 on socket 0 00:06:05.117 EAL: Detected lcore 79 as core 7 on socket 0 00:06:05.117 EAL: Detected lcore 80 as core 8 on socket 0 00:06:05.117 EAL: Detected lcore 81 as core 9 on socket 0 00:06:05.117 EAL: Detected lcore 82 as core 10 on socket 0 00:06:05.117 EAL: Detected lcore 83 as core 11 on socket 0 00:06:05.117 EAL: Detected lcore 84 as core 12 on socket 0 00:06:05.117 EAL: Detected lcore 85 as core 13 on socket 0 00:06:05.117 EAL: Detected lcore 86 as core 14 on socket 0 00:06:05.117 EAL: Detected lcore 87 as core 15 on socket 0 00:06:05.117 EAL: Detected lcore 88 as core 16 on socket 0 00:06:05.117 EAL: Detected lcore 89 as core 17 on socket 0 00:06:05.117 EAL: Detected lcore 90 as core 18 on socket 0 00:06:05.117 EAL: Detected lcore 91 as core 19 on socket 0 00:06:05.117 EAL: Detected lcore 92 as core 20 on socket 0 00:06:05.117 EAL: Detected lcore 93 as core 21 on socket 0 00:06:05.117 EAL: Detected lcore 94 as core 22 on socket 0 00:06:05.117 EAL: Detected lcore 95 as core 23 on socket 0 00:06:05.117 EAL: Detected lcore 96 as core 24 on socket 0 00:06:05.117 EAL: Detected lcore 97 as core 25 on socket 0 00:06:05.117 EAL: Detected lcore 98 as core 26 on socket 0 00:06:05.117 EAL: Detected lcore 99 as core 27 on socket 0 00:06:05.117 EAL: Detected lcore 100 as core 28 on socket 0 00:06:05.117 EAL: Detected lcore 101 as core 29 on socket 0 00:06:05.117 EAL: Detected lcore 102 as core 30 on socket 0 00:06:05.117 EAL: Detected lcore 103 as core 31 on socket 0 00:06:05.117 EAL: Detected lcore 104 as core 32 on socket 0 00:06:05.117 EAL: Detected lcore 105 as core 33 on socket 0 00:06:05.117 EAL: Detected lcore 106 as core 34 on socket 0 00:06:05.117 EAL: Detected lcore 107 as core 35 on socket 0 00:06:05.117 EAL: Detected lcore 108 as core 0 on socket 1 00:06:05.117 EAL: Detected lcore 109 as core 1 on socket 1 00:06:05.117 EAL: Detected lcore 110 as core 2 on socket 1 00:06:05.117 EAL: Detected lcore 111 as core 3 on socket 1 00:06:05.117 EAL: Detected 
lcore 112 as core 4 on socket 1 00:06:05.117 EAL: Detected lcore 113 as core 5 on socket 1 00:06:05.117 EAL: Detected lcore 114 as core 6 on socket 1 00:06:05.117 EAL: Detected lcore 115 as core 7 on socket 1 00:06:05.117 EAL: Detected lcore 116 as core 8 on socket 1 00:06:05.117 EAL: Detected lcore 117 as core 9 on socket 1 00:06:05.117 EAL: Detected lcore 118 as core 10 on socket 1 00:06:05.117 EAL: Detected lcore 119 as core 11 on socket 1 00:06:05.117 EAL: Detected lcore 120 as core 12 on socket 1 00:06:05.117 EAL: Detected lcore 121 as core 13 on socket 1 00:06:05.117 EAL: Detected lcore 122 as core 14 on socket 1 00:06:05.117 EAL: Detected lcore 123 as core 15 on socket 1 00:06:05.117 EAL: Detected lcore 124 as core 16 on socket 1 00:06:05.117 EAL: Detected lcore 125 as core 17 on socket 1 00:06:05.117 EAL: Detected lcore 126 as core 18 on socket 1 00:06:05.117 EAL: Detected lcore 127 as core 19 on socket 1 00:06:05.117 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:05.117 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:05.117 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:05.117 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:05.117 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:05.117 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:05.117 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:05.117 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:05.117 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:05.117 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:05.117 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:05.117 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:05.117 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:05.117 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:05.117 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:05.117 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:05.117 EAL: Maximum logical cores by configuration: 128 00:06:05.117 EAL: Detected CPU lcores: 128 00:06:05.117 EAL: Detected NUMA nodes: 2 00:06:05.117 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:05.117 EAL: Detected shared linkage of DPDK 00:06:05.117 EAL: No shared files mode enabled, IPC will be disabled 00:06:05.117 EAL: Bus pci wants IOVA as 'DC' 00:06:05.117 EAL: Buses did not request a specific IOVA mode. 00:06:05.117 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:05.117 EAL: Selected IOVA mode 'VA' 00:06:05.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.117 EAL: Probing VFIO support... 00:06:05.117 EAL: IOMMU type 1 (Type 1) is supported 00:06:05.117 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:05.117 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:05.117 EAL: VFIO support initialized 00:06:05.117 EAL: Ask a virtual area of 0x2e000 bytes 00:06:05.117 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:05.117 EAL: Setting up physically contiguous memory... 
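Host-side checks behind the EAL messages above ("IOMMU type 1 (Type 1) is supported", "VFIO support initialized"); a hedged sketch, not part of the harness.

    ls /sys/kernel/iommu_groups | wc -l      # non-zero when the IOMMU is enabled
    lsmod | grep '^vfio'                     # vfio, vfio_pci, vfio_iommu_type1
    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null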
00:06:05.117 EAL: Setting maximum number of open files to 524288 00:06:05.117 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:05.117 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:05.117 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:05.117 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.117 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:05.117 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.117 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.117 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:05.117 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:05.117 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.117 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:05.117 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.117 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.117 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:05.117 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:05.117 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.117 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:05.118 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.118 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:05.118 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:05.118 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.118 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:05.118 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.118 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:05.118 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.118 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:05.118 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.118 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:05.118 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:05.118 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.118 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:05.118 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:05.118 EAL: Hugepages will be freed exactly as allocated. 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: TSC frequency is ~2400000 KHz 00:06:05.118 EAL: Main lcore 0 is ready (tid=7f6f4c3f6a00;cpuset=[0]) 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 0 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 2MB 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:05.118 EAL: Mem event callback 'spdk:(nil)' registered 00:06:05.118 00:06:05.118 00:06:05.118 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.118 http://cunit.sourceforge.net/ 00:06:05.118 00:06:05.118 00:06:05.118 Suite: components_suite 00:06:05.118 Test: vtophys_malloc_test ...passed 00:06:05.118 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 4MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 4MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 6MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 6MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 10MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 10MB 00:06:05.118 EAL: Trying to obtain current memory policy. 
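A hedged sketch of inspecting the per-node 2 MB hugepage pools that back the memseg lists EAL has just reserved; it corresponds to the "node0 2048kB 2048 / 2048" status printed earlier in this log.

    for node in /sys/devices/system/node/node[01]; do
      free=$(cat "$node/hugepages/hugepages-2048kB/free_hugepages")
      total=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
      echo "${node##*/}: $free free / $total total 2048kB hugepages"
    done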
00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 18MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 18MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 34MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 34MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 66MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 66MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 130MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 130MB 00:06:05.118 EAL: Trying to obtain current memory policy. 00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.118 EAL: Restoring previous memory policy: 4 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was expanded by 258MB 00:06:05.118 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.118 EAL: request: mp_malloc_sync 00:06:05.118 EAL: No shared files mode enabled, IPC is disabled 00:06:05.118 EAL: Heap on socket 0 was shrunk by 258MB 00:06:05.118 EAL: Trying to obtain current memory policy. 
00:06:05.118 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.437 EAL: Restoring previous memory policy: 4 00:06:05.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.437 EAL: request: mp_malloc_sync 00:06:05.437 EAL: No shared files mode enabled, IPC is disabled 00:06:05.437 EAL: Heap on socket 0 was expanded by 514MB 00:06:05.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.437 EAL: request: mp_malloc_sync 00:06:05.437 EAL: No shared files mode enabled, IPC is disabled 00:06:05.437 EAL: Heap on socket 0 was shrunk by 514MB 00:06:05.437 EAL: Trying to obtain current memory policy. 00:06:05.437 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.437 EAL: Restoring previous memory policy: 4 00:06:05.437 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.437 EAL: request: mp_malloc_sync 00:06:05.437 EAL: No shared files mode enabled, IPC is disabled 00:06:05.437 EAL: Heap on socket 0 was expanded by 1026MB 00:06:05.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.698 EAL: request: mp_malloc_sync 00:06:05.698 EAL: No shared files mode enabled, IPC is disabled 00:06:05.698 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.698 passed 00:06:05.698 00:06:05.698 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.698 suites 1 1 n/a 0 0 00:06:05.698 tests 2 2 2 0 0 00:06:05.698 asserts 497 497 497 0 n/a 00:06:05.698 00:06:05.698 Elapsed time = 0.648 seconds 00:06:05.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.698 EAL: request: mp_malloc_sync 00:06:05.698 EAL: No shared files mode enabled, IPC is disabled 00:06:05.698 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.698 EAL: No shared files mode enabled, IPC is disabled 00:06:05.698 EAL: No shared files mode enabled, IPC is disabled 00:06:05.698 EAL: No shared files mode enabled, IPC is disabled 00:06:05.698 00:06:05.698 real 0m0.770s 00:06:05.698 user 0m0.413s 00:06:05.698 sys 0m0.333s 00:06:05.698 15:11:15 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.698 15:11:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:05.698 ************************************ 00:06:05.698 END TEST env_vtophys 00:06:05.698 ************************************ 00:06:05.698 15:11:15 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.698 15:11:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:05.698 15:11:15 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.698 15:11:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.698 15:11:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.698 ************************************ 00:06:05.698 START TEST env_pci 00:06:05.698 ************************************ 00:06:05.698 15:11:15 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:05.698 00:06:05.698 00:06:05.698 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.698 http://cunit.sourceforge.net/ 00:06:05.698 00:06:05.698 00:06:05.698 Suite: pci 00:06:05.698 Test: pci_hook ...[2024-07-15 15:11:15.313188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 475444 has claimed it 00:06:05.959 EAL: Cannot find device (10000:00:01.0) 00:06:05.959 EAL: Failed to attach device on primary process 00:06:05.959 passed 00:06:05.959 
00:06:05.959 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.959 suites 1 1 n/a 0 0 00:06:05.959 tests 1 1 1 0 0 00:06:05.959 asserts 25 25 25 0 n/a 00:06:05.959 00:06:05.959 Elapsed time = 0.031 seconds 00:06:05.959 00:06:05.959 real 0m0.052s 00:06:05.959 user 0m0.018s 00:06:05.959 sys 0m0.034s 00:06:05.959 15:11:15 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.959 15:11:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:05.959 ************************************ 00:06:05.959 END TEST env_pci 00:06:05.959 ************************************ 00:06:05.959 15:11:15 env -- common/autotest_common.sh@1142 -- # return 0 00:06:05.959 15:11:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:05.959 15:11:15 env -- env/env.sh@15 -- # uname 00:06:05.959 15:11:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:05.959 15:11:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:05.959 15:11:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.959 15:11:15 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:05.959 15:11:15 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.959 15:11:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.959 ************************************ 00:06:05.959 START TEST env_dpdk_post_init 00:06:05.959 ************************************ 00:06:05.959 15:11:15 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.959 EAL: Detected CPU lcores: 128 00:06:05.959 EAL: Detected NUMA nodes: 2 00:06:05.959 EAL: Detected shared linkage of DPDK 00:06:05.959 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.959 EAL: Selected IOVA mode 'VA' 00:06:05.959 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.959 EAL: VFIO support initialized 00:06:05.959 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.959 EAL: Using IOMMU type 1 (Type 1) 00:06:06.220 EAL: Ignore mapping IO port bar(1) 00:06:06.220 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:06.481 EAL: Ignore mapping IO port bar(1) 00:06:06.481 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:06.741 EAL: Ignore mapping IO port bar(1) 00:06:06.741 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:06.741 EAL: Ignore mapping IO port bar(1) 00:06:07.002 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:07.002 EAL: Ignore mapping IO port bar(1) 00:06:07.263 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:07.263 EAL: Ignore mapping IO port bar(1) 00:06:07.523 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:07.523 EAL: Ignore mapping IO port bar(1) 00:06:07.523 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:07.782 EAL: Ignore mapping IO port bar(1) 00:06:07.782 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:08.042 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:08.302 EAL: Ignore mapping IO port bar(1) 00:06:08.302 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
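A hedged sketch of checking and reproducing the driver flips recorded throughout this log ("nvme -> vfio-pci" and back) for the controller this job uses; the bdf value is taken from the trace, the commands are standard sysfs interfaces rather than the setup.sh implementation.

    bdf=0000:65:00.0
    basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")"       # driver currently bound
    echo vfio-pci | sudo tee "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"   | sudo tee "/sys/bus/pci/devices/$bdf/driver/unbind"
    echo "$bdf"   | sudo tee /sys/bus/pci/drivers_probe             # rebind per driver_override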
00:06:08.302 EAL: Ignore mapping IO port bar(1) 00:06:08.564 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:08.564 EAL: Ignore mapping IO port bar(1) 00:06:08.824 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:08.824 EAL: Ignore mapping IO port bar(1) 00:06:09.085 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:09.085 EAL: Ignore mapping IO port bar(1) 00:06:09.085 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:09.345 EAL: Ignore mapping IO port bar(1) 00:06:09.345 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:09.606 EAL: Ignore mapping IO port bar(1) 00:06:09.606 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:09.868 EAL: Ignore mapping IO port bar(1) 00:06:09.868 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:09.868 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:09.868 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:09.868 Starting DPDK initialization... 00:06:09.868 Starting SPDK post initialization... 00:06:09.868 SPDK NVMe probe 00:06:09.868 Attaching to 0000:65:00.0 00:06:09.868 Attached to 0000:65:00.0 00:06:09.868 Cleaning up... 00:06:11.780 00:06:11.780 real 0m5.722s 00:06:11.780 user 0m0.192s 00:06:11.780 sys 0m0.067s 00:06:11.780 15:11:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.780 15:11:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.780 ************************************ 00:06:11.780 END TEST env_dpdk_post_init 00:06:11.780 ************************************ 00:06:11.780 15:11:21 env -- common/autotest_common.sh@1142 -- # return 0 00:06:11.780 15:11:21 env -- env/env.sh@26 -- # uname 00:06:11.780 15:11:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:11.780 15:11:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:11.780 15:11:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.780 15:11:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.780 15:11:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.780 ************************************ 00:06:11.780 START TEST env_mem_callbacks 00:06:11.780 ************************************ 00:06:11.780 15:11:21 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:11.780 EAL: Detected CPU lcores: 128 00:06:11.780 EAL: Detected NUMA nodes: 2 00:06:11.780 EAL: Detected shared linkage of DPDK 00:06:11.780 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:11.780 EAL: Selected IOVA mode 'VA' 00:06:11.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.780 EAL: VFIO support initialized 00:06:11.780 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:11.780 00:06:11.780 00:06:11.780 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.780 http://cunit.sourceforge.net/ 00:06:11.780 00:06:11.780 00:06:11.780 Suite: memory 00:06:11.780 Test: test ... 
00:06:11.780 register 0x200000200000 2097152 00:06:11.780 malloc 3145728 00:06:11.780 register 0x200000400000 4194304 00:06:11.780 buf 0x200000500000 len 3145728 PASSED 00:06:11.780 malloc 64 00:06:11.780 buf 0x2000004fff40 len 64 PASSED 00:06:11.780 malloc 4194304 00:06:11.780 register 0x200000800000 6291456 00:06:11.780 buf 0x200000a00000 len 4194304 PASSED 00:06:11.780 free 0x200000500000 3145728 00:06:11.780 free 0x2000004fff40 64 00:06:11.780 unregister 0x200000400000 4194304 PASSED 00:06:11.780 free 0x200000a00000 4194304 00:06:11.780 unregister 0x200000800000 6291456 PASSED 00:06:11.780 malloc 8388608 00:06:11.780 register 0x200000400000 10485760 00:06:11.780 buf 0x200000600000 len 8388608 PASSED 00:06:11.780 free 0x200000600000 8388608 00:06:11.780 unregister 0x200000400000 10485760 PASSED 00:06:11.780 passed 00:06:11.780 00:06:11.780 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.780 suites 1 1 n/a 0 0 00:06:11.780 tests 1 1 1 0 0 00:06:11.780 asserts 15 15 15 0 n/a 00:06:11.780 00:06:11.780 Elapsed time = 0.004 seconds 00:06:11.780 00:06:11.780 real 0m0.060s 00:06:11.780 user 0m0.021s 00:06:11.780 sys 0m0.039s 00:06:11.780 15:11:21 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.780 15:11:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:11.780 ************************************ 00:06:11.780 END TEST env_mem_callbacks 00:06:11.780 ************************************ 00:06:11.780 15:11:21 env -- common/autotest_common.sh@1142 -- # return 0 00:06:11.780 00:06:11.780 real 0m7.314s 00:06:11.780 user 0m1.026s 00:06:11.780 sys 0m0.820s 00:06:11.780 15:11:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.780 15:11:21 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.780 ************************************ 00:06:11.780 END TEST env 00:06:11.780 ************************************ 00:06:11.780 15:11:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.780 15:11:21 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:11.780 15:11:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.781 15:11:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.781 15:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.042 ************************************ 00:06:12.042 START TEST rpc 00:06:12.042 ************************************ 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:12.042 * Looking for test storage... 00:06:12.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:12.042 15:11:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=476889 00:06:12.042 15:11:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.042 15:11:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:12.042 15:11:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 476889 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 476889 ']' 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
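A hedged reduction of the start-and-wait sequence rpc.sh is tracing around this point: launch the target with the bdev trace group, poll for the UNIX-domain socket, then issue a first RPC. The 30-second cap is an assumption, not the harness's timeout.

    ./build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    for _ in $(seq 1 30); do
      [[ -S /var/tmp/spdk.sock ]] && break   # socket appears once the target is listening
      sleep 1
    done
    ./scripts/rpc.py rpc_get_methods > /dev/null && echo "target $spdk_pid is listening"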
00:06:12.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.042 15:11:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.042 [2024-07-15 15:11:21.565387] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:12.042 [2024-07-15 15:11:21.565449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476889 ] 00:06:12.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.042 [2024-07-15 15:11:21.636001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.303 [2024-07-15 15:11:21.708971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:12.303 [2024-07-15 15:11:21.709014] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 476889' to capture a snapshot of events at runtime. 00:06:12.303 [2024-07-15 15:11:21.709021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.303 [2024-07-15 15:11:21.709028] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.303 [2024-07-15 15:11:21.709034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid476889 for offline analysis/debug. 00:06:12.303 [2024-07-15 15:11:21.709055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.877 15:11:22 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.877 15:11:22 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.877 15:11:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:12.877 15:11:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:12.877 15:11:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:12.877 15:11:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:12.877 15:11:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.877 15:11:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.877 15:11:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 ************************************ 00:06:12.877 START TEST rpc_integrity 00:06:12.877 ************************************ 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:12.877 { 00:06:12.877 "name": "Malloc0", 00:06:12.877 "aliases": [ 00:06:12.877 "77fe9800-e53a-415c-8d17-2cb8a4e03098" 00:06:12.877 ], 00:06:12.877 "product_name": "Malloc disk", 00:06:12.877 "block_size": 512, 00:06:12.877 "num_blocks": 16384, 00:06:12.877 "uuid": "77fe9800-e53a-415c-8d17-2cb8a4e03098", 00:06:12.877 "assigned_rate_limits": { 00:06:12.877 "rw_ios_per_sec": 0, 00:06:12.877 "rw_mbytes_per_sec": 0, 00:06:12.877 "r_mbytes_per_sec": 0, 00:06:12.877 "w_mbytes_per_sec": 0 00:06:12.877 }, 00:06:12.877 "claimed": false, 00:06:12.877 "zoned": false, 00:06:12.877 "supported_io_types": { 00:06:12.877 "read": true, 00:06:12.877 "write": true, 00:06:12.877 "unmap": true, 00:06:12.877 "flush": true, 00:06:12.877 "reset": true, 00:06:12.877 "nvme_admin": false, 00:06:12.877 "nvme_io": false, 00:06:12.877 "nvme_io_md": false, 00:06:12.877 "write_zeroes": true, 00:06:12.877 "zcopy": true, 00:06:12.877 "get_zone_info": false, 00:06:12.877 "zone_management": false, 00:06:12.877 "zone_append": false, 00:06:12.877 "compare": false, 00:06:12.877 "compare_and_write": false, 00:06:12.877 "abort": true, 00:06:12.877 "seek_hole": false, 00:06:12.877 "seek_data": false, 00:06:12.877 "copy": true, 00:06:12.877 "nvme_iov_md": false 00:06:12.877 }, 00:06:12.877 "memory_domains": [ 00:06:12.877 { 00:06:12.877 "dma_device_id": "system", 00:06:12.877 "dma_device_type": 1 00:06:12.877 }, 00:06:12.877 { 00:06:12.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.877 "dma_device_type": 2 00:06:12.877 } 00:06:12.877 ], 00:06:12.877 "driver_specific": {} 00:06:12.877 } 00:06:12.877 ]' 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:12.877 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.877 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.139 [2024-07-15 15:11:22.501904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:13.139 [2024-07-15 15:11:22.501938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.140 [2024-07-15 15:11:22.501951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e64cb0 00:06:13.140 [2024-07-15 15:11:22.501958] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.140 
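(Aside: the rpc_integrity sequence above can be replayed by hand against a running spdk_tgt. A minimal sketch, assuming the default RPC socket /var/tmp/spdk.sock and using SPDK_DIR as a placeholder for the checkout this job uses; the RPC names and arguments are the ones visible in the log, everything else is an assumption.)

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder; adjust to the local checkout
rpc="$SPDK_DIR/scripts/rpc.py"                               # talks to /var/tmp/spdk.sock by default

# 8 MB malloc bdev with 512-byte blocks -> the 16384-block Malloc0 seen in the JSON above
malloc=$("$rpc" bdev_malloc_create 8 512)

# passthru bdev layered on top of the malloc bdev
"$rpc" bdev_passthru_create -b "$malloc" -p Passthru0

# the test expects bdev_get_bdevs to report exactly two bdevs at this point
"$rpc" bdev_get_bdevs | jq length

# teardown mirrors the test: passthru first, then the base malloc bdev
"$rpc" bdev_passthru_delete Passthru0
"$rpc" bdev_malloc_delete "$malloc"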
[2024-07-15 15:11:22.503307] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.140 [2024-07-15 15:11:22.503328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:13.140 Passthru0 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:13.140 { 00:06:13.140 "name": "Malloc0", 00:06:13.140 "aliases": [ 00:06:13.140 "77fe9800-e53a-415c-8d17-2cb8a4e03098" 00:06:13.140 ], 00:06:13.140 "product_name": "Malloc disk", 00:06:13.140 "block_size": 512, 00:06:13.140 "num_blocks": 16384, 00:06:13.140 "uuid": "77fe9800-e53a-415c-8d17-2cb8a4e03098", 00:06:13.140 "assigned_rate_limits": { 00:06:13.140 "rw_ios_per_sec": 0, 00:06:13.140 "rw_mbytes_per_sec": 0, 00:06:13.140 "r_mbytes_per_sec": 0, 00:06:13.140 "w_mbytes_per_sec": 0 00:06:13.140 }, 00:06:13.140 "claimed": true, 00:06:13.140 "claim_type": "exclusive_write", 00:06:13.140 "zoned": false, 00:06:13.140 "supported_io_types": { 00:06:13.140 "read": true, 00:06:13.140 "write": true, 00:06:13.140 "unmap": true, 00:06:13.140 "flush": true, 00:06:13.140 "reset": true, 00:06:13.140 "nvme_admin": false, 00:06:13.140 "nvme_io": false, 00:06:13.140 "nvme_io_md": false, 00:06:13.140 "write_zeroes": true, 00:06:13.140 "zcopy": true, 00:06:13.140 "get_zone_info": false, 00:06:13.140 "zone_management": false, 00:06:13.140 "zone_append": false, 00:06:13.140 "compare": false, 00:06:13.140 "compare_and_write": false, 00:06:13.140 "abort": true, 00:06:13.140 "seek_hole": false, 00:06:13.140 "seek_data": false, 00:06:13.140 "copy": true, 00:06:13.140 "nvme_iov_md": false 00:06:13.140 }, 00:06:13.140 "memory_domains": [ 00:06:13.140 { 00:06:13.140 "dma_device_id": "system", 00:06:13.140 "dma_device_type": 1 00:06:13.140 }, 00:06:13.140 { 00:06:13.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.140 "dma_device_type": 2 00:06:13.140 } 00:06:13.140 ], 00:06:13.140 "driver_specific": {} 00:06:13.140 }, 00:06:13.140 { 00:06:13.140 "name": "Passthru0", 00:06:13.140 "aliases": [ 00:06:13.140 "0b6d24ab-0434-5117-8bba-069a65712789" 00:06:13.140 ], 00:06:13.140 "product_name": "passthru", 00:06:13.140 "block_size": 512, 00:06:13.140 "num_blocks": 16384, 00:06:13.140 "uuid": "0b6d24ab-0434-5117-8bba-069a65712789", 00:06:13.140 "assigned_rate_limits": { 00:06:13.140 "rw_ios_per_sec": 0, 00:06:13.140 "rw_mbytes_per_sec": 0, 00:06:13.140 "r_mbytes_per_sec": 0, 00:06:13.140 "w_mbytes_per_sec": 0 00:06:13.140 }, 00:06:13.140 "claimed": false, 00:06:13.140 "zoned": false, 00:06:13.140 "supported_io_types": { 00:06:13.140 "read": true, 00:06:13.140 "write": true, 00:06:13.140 "unmap": true, 00:06:13.140 "flush": true, 00:06:13.140 "reset": true, 00:06:13.140 "nvme_admin": false, 00:06:13.140 "nvme_io": false, 00:06:13.140 "nvme_io_md": false, 00:06:13.140 "write_zeroes": true, 00:06:13.140 "zcopy": true, 00:06:13.140 "get_zone_info": false, 00:06:13.140 "zone_management": false, 00:06:13.140 "zone_append": false, 00:06:13.140 "compare": false, 00:06:13.140 "compare_and_write": false, 00:06:13.140 "abort": true, 00:06:13.140 "seek_hole": false, 
00:06:13.140 "seek_data": false, 00:06:13.140 "copy": true, 00:06:13.140 "nvme_iov_md": false 00:06:13.140 }, 00:06:13.140 "memory_domains": [ 00:06:13.140 { 00:06:13.140 "dma_device_id": "system", 00:06:13.140 "dma_device_type": 1 00:06:13.140 }, 00:06:13.140 { 00:06:13.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.140 "dma_device_type": 2 00:06:13.140 } 00:06:13.140 ], 00:06:13.140 "driver_specific": { 00:06:13.140 "passthru": { 00:06:13.140 "name": "Passthru0", 00:06:13.140 "base_bdev_name": "Malloc0" 00:06:13.140 } 00:06:13.140 } 00:06:13.140 } 00:06:13.140 ]' 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:13.140 15:11:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:13.140 00:06:13.140 real 0m0.293s 00:06:13.140 user 0m0.185s 00:06:13.140 sys 0m0.043s 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 ************************************ 00:06:13.140 END TEST rpc_integrity 00:06:13.140 ************************************ 00:06:13.140 15:11:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.140 15:11:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:13.140 15:11:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.140 15:11:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.140 15:11:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 ************************************ 00:06:13.140 START TEST rpc_plugins 00:06:13.140 ************************************ 00:06:13.140 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:13.140 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:13.140 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.140 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:13.140 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.140 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:13.140 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:13.140 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.141 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:13.141 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.141 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:13.141 { 00:06:13.141 "name": "Malloc1", 00:06:13.141 "aliases": [ 00:06:13.141 "5d3f3b02-e0ed-4c30-8184-651a1ce92090" 00:06:13.141 ], 00:06:13.141 "product_name": "Malloc disk", 00:06:13.141 "block_size": 4096, 00:06:13.141 "num_blocks": 256, 00:06:13.141 "uuid": "5d3f3b02-e0ed-4c30-8184-651a1ce92090", 00:06:13.141 "assigned_rate_limits": { 00:06:13.141 "rw_ios_per_sec": 0, 00:06:13.141 "rw_mbytes_per_sec": 0, 00:06:13.141 "r_mbytes_per_sec": 0, 00:06:13.141 "w_mbytes_per_sec": 0 00:06:13.141 }, 00:06:13.141 "claimed": false, 00:06:13.141 "zoned": false, 00:06:13.141 "supported_io_types": { 00:06:13.141 "read": true, 00:06:13.141 "write": true, 00:06:13.141 "unmap": true, 00:06:13.141 "flush": true, 00:06:13.141 "reset": true, 00:06:13.141 "nvme_admin": false, 00:06:13.141 "nvme_io": false, 00:06:13.141 "nvme_io_md": false, 00:06:13.141 "write_zeroes": true, 00:06:13.141 "zcopy": true, 00:06:13.141 "get_zone_info": false, 00:06:13.141 "zone_management": false, 00:06:13.141 "zone_append": false, 00:06:13.141 "compare": false, 00:06:13.141 "compare_and_write": false, 00:06:13.141 "abort": true, 00:06:13.141 "seek_hole": false, 00:06:13.141 "seek_data": false, 00:06:13.141 "copy": true, 00:06:13.141 "nvme_iov_md": false 00:06:13.141 }, 00:06:13.141 "memory_domains": [ 00:06:13.141 { 00:06:13.141 "dma_device_id": "system", 00:06:13.141 "dma_device_type": 1 00:06:13.141 }, 00:06:13.141 { 00:06:13.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.141 "dma_device_type": 2 00:06:13.141 } 00:06:13.141 ], 00:06:13.141 "driver_specific": {} 00:06:13.141 } 00:06:13.141 ]' 00:06:13.141 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:13.401 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:13.401 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.401 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:13.401 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.401 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:13.401 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:13.402 15:11:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:13.402 00:06:13.402 real 0m0.146s 00:06:13.402 user 0m0.093s 00:06:13.402 sys 0m0.021s 00:06:13.402 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.402 15:11:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 ************************************ 00:06:13.402 END TEST rpc_plugins 00:06:13.402 ************************************ 00:06:13.402 15:11:22 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.402 15:11:22 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:13.402 15:11:22 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.402 15:11:22 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.402 15:11:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 ************************************ 00:06:13.402 START TEST rpc_trace_cmd_test 00:06:13.402 ************************************ 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:13.402 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid476889", 00:06:13.402 "tpoint_group_mask": "0x8", 00:06:13.402 "iscsi_conn": { 00:06:13.402 "mask": "0x2", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "scsi": { 00:06:13.402 "mask": "0x4", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "bdev": { 00:06:13.402 "mask": "0x8", 00:06:13.402 "tpoint_mask": "0xffffffffffffffff" 00:06:13.402 }, 00:06:13.402 "nvmf_rdma": { 00:06:13.402 "mask": "0x10", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "nvmf_tcp": { 00:06:13.402 "mask": "0x20", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "ftl": { 00:06:13.402 "mask": "0x40", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "blobfs": { 00:06:13.402 "mask": "0x80", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "dsa": { 00:06:13.402 "mask": "0x200", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "thread": { 00:06:13.402 "mask": "0x400", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "nvme_pcie": { 00:06:13.402 "mask": "0x800", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "iaa": { 00:06:13.402 "mask": "0x1000", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "nvme_tcp": { 00:06:13.402 "mask": "0x2000", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "bdev_nvme": { 00:06:13.402 "mask": "0x4000", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 }, 00:06:13.402 "sock": { 00:06:13.402 "mask": "0x8000", 00:06:13.402 "tpoint_mask": "0x0" 00:06:13.402 } 00:06:13.402 }' 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:13.402 15:11:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
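(Aside: the jq checks that rpc_trace_cmd_test runs above reduce to three lookups against trace_get_info. A sketch, reusing the $rpc shorthand from the previous aside and assuming the target was started with '-e bdev' as it was here.)

info=$("$rpc" trace_get_info)

echo "$info" | jq -r .tpoint_group_mask   # "-e bdev" on the command line -> group mask "0x8"
echo "$info" | jq -r .bdev.tpoint_mask    # every bdev tracepoint enabled: "0xffffffffffffffff"
echo "$info" | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<target pid>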
00:06:13.663 00:06:13.663 real 0m0.225s 00:06:13.663 user 0m0.190s 00:06:13.663 sys 0m0.024s 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.663 15:11:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.663 ************************************ 00:06:13.663 END TEST rpc_trace_cmd_test 00:06:13.663 ************************************ 00:06:13.663 15:11:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.663 15:11:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:13.663 15:11:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:13.663 15:11:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:13.663 15:11:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.663 15:11:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.663 15:11:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.663 ************************************ 00:06:13.663 START TEST rpc_daemon_integrity 00:06:13.663 ************************************ 00:06:13.663 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:13.663 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:13.663 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.664 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:13.925 { 00:06:13.925 "name": "Malloc2", 00:06:13.925 "aliases": [ 00:06:13.925 "ce4ccfed-9838-4b87-a1d5-eaa779e48fbe" 00:06:13.925 ], 00:06:13.925 "product_name": "Malloc disk", 00:06:13.925 "block_size": 512, 00:06:13.925 "num_blocks": 16384, 00:06:13.925 "uuid": "ce4ccfed-9838-4b87-a1d5-eaa779e48fbe", 00:06:13.925 "assigned_rate_limits": { 00:06:13.925 "rw_ios_per_sec": 0, 00:06:13.925 "rw_mbytes_per_sec": 0, 00:06:13.925 "r_mbytes_per_sec": 0, 00:06:13.925 "w_mbytes_per_sec": 0 00:06:13.925 }, 00:06:13.925 "claimed": false, 00:06:13.925 "zoned": false, 00:06:13.925 "supported_io_types": { 00:06:13.925 "read": true, 00:06:13.925 "write": true, 00:06:13.925 "unmap": true, 00:06:13.925 "flush": true, 00:06:13.925 "reset": true, 00:06:13.925 "nvme_admin": false, 00:06:13.925 "nvme_io": false, 
00:06:13.925 "nvme_io_md": false, 00:06:13.925 "write_zeroes": true, 00:06:13.925 "zcopy": true, 00:06:13.925 "get_zone_info": false, 00:06:13.925 "zone_management": false, 00:06:13.925 "zone_append": false, 00:06:13.925 "compare": false, 00:06:13.925 "compare_and_write": false, 00:06:13.925 "abort": true, 00:06:13.925 "seek_hole": false, 00:06:13.925 "seek_data": false, 00:06:13.925 "copy": true, 00:06:13.925 "nvme_iov_md": false 00:06:13.925 }, 00:06:13.925 "memory_domains": [ 00:06:13.925 { 00:06:13.925 "dma_device_id": "system", 00:06:13.925 "dma_device_type": 1 00:06:13.925 }, 00:06:13.925 { 00:06:13.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.925 "dma_device_type": 2 00:06:13.925 } 00:06:13.925 ], 00:06:13.925 "driver_specific": {} 00:06:13.925 } 00:06:13.925 ]' 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.925 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 [2024-07-15 15:11:23.360231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:13.925 [2024-07-15 15:11:23.360262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.925 [2024-07-15 15:11:23.360276] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e5de60 00:06:13.925 [2024-07-15 15:11:23.360284] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.925 [2024-07-15 15:11:23.361497] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.925 [2024-07-15 15:11:23.361516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:13.925 Passthru0 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:13.926 { 00:06:13.926 "name": "Malloc2", 00:06:13.926 "aliases": [ 00:06:13.926 "ce4ccfed-9838-4b87-a1d5-eaa779e48fbe" 00:06:13.926 ], 00:06:13.926 "product_name": "Malloc disk", 00:06:13.926 "block_size": 512, 00:06:13.926 "num_blocks": 16384, 00:06:13.926 "uuid": "ce4ccfed-9838-4b87-a1d5-eaa779e48fbe", 00:06:13.926 "assigned_rate_limits": { 00:06:13.926 "rw_ios_per_sec": 0, 00:06:13.926 "rw_mbytes_per_sec": 0, 00:06:13.926 "r_mbytes_per_sec": 0, 00:06:13.926 "w_mbytes_per_sec": 0 00:06:13.926 }, 00:06:13.926 "claimed": true, 00:06:13.926 "claim_type": "exclusive_write", 00:06:13.926 "zoned": false, 00:06:13.926 "supported_io_types": { 00:06:13.926 "read": true, 00:06:13.926 "write": true, 00:06:13.926 "unmap": true, 00:06:13.926 "flush": true, 00:06:13.926 "reset": true, 00:06:13.926 "nvme_admin": false, 00:06:13.926 "nvme_io": false, 00:06:13.926 "nvme_io_md": false, 00:06:13.926 "write_zeroes": true, 00:06:13.926 "zcopy": true, 00:06:13.926 "get_zone_info": 
false, 00:06:13.926 "zone_management": false, 00:06:13.926 "zone_append": false, 00:06:13.926 "compare": false, 00:06:13.926 "compare_and_write": false, 00:06:13.926 "abort": true, 00:06:13.926 "seek_hole": false, 00:06:13.926 "seek_data": false, 00:06:13.926 "copy": true, 00:06:13.926 "nvme_iov_md": false 00:06:13.926 }, 00:06:13.926 "memory_domains": [ 00:06:13.926 { 00:06:13.926 "dma_device_id": "system", 00:06:13.926 "dma_device_type": 1 00:06:13.926 }, 00:06:13.926 { 00:06:13.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.926 "dma_device_type": 2 00:06:13.926 } 00:06:13.926 ], 00:06:13.926 "driver_specific": {} 00:06:13.926 }, 00:06:13.926 { 00:06:13.926 "name": "Passthru0", 00:06:13.926 "aliases": [ 00:06:13.926 "697b710b-7c72-5833-809b-bf1dfab5f342" 00:06:13.926 ], 00:06:13.926 "product_name": "passthru", 00:06:13.926 "block_size": 512, 00:06:13.926 "num_blocks": 16384, 00:06:13.926 "uuid": "697b710b-7c72-5833-809b-bf1dfab5f342", 00:06:13.926 "assigned_rate_limits": { 00:06:13.926 "rw_ios_per_sec": 0, 00:06:13.926 "rw_mbytes_per_sec": 0, 00:06:13.926 "r_mbytes_per_sec": 0, 00:06:13.926 "w_mbytes_per_sec": 0 00:06:13.926 }, 00:06:13.926 "claimed": false, 00:06:13.926 "zoned": false, 00:06:13.926 "supported_io_types": { 00:06:13.926 "read": true, 00:06:13.926 "write": true, 00:06:13.926 "unmap": true, 00:06:13.926 "flush": true, 00:06:13.926 "reset": true, 00:06:13.926 "nvme_admin": false, 00:06:13.926 "nvme_io": false, 00:06:13.926 "nvme_io_md": false, 00:06:13.926 "write_zeroes": true, 00:06:13.926 "zcopy": true, 00:06:13.926 "get_zone_info": false, 00:06:13.926 "zone_management": false, 00:06:13.926 "zone_append": false, 00:06:13.926 "compare": false, 00:06:13.926 "compare_and_write": false, 00:06:13.926 "abort": true, 00:06:13.926 "seek_hole": false, 00:06:13.926 "seek_data": false, 00:06:13.926 "copy": true, 00:06:13.926 "nvme_iov_md": false 00:06:13.926 }, 00:06:13.926 "memory_domains": [ 00:06:13.926 { 00:06:13.926 "dma_device_id": "system", 00:06:13.926 "dma_device_type": 1 00:06:13.926 }, 00:06:13.926 { 00:06:13.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.926 "dma_device_type": 2 00:06:13.926 } 00:06:13.926 ], 00:06:13.926 "driver_specific": { 00:06:13.926 "passthru": { 00:06:13.926 "name": "Passthru0", 00:06:13.926 "base_bdev_name": "Malloc2" 00:06:13.926 } 00:06:13.926 } 00:06:13.926 } 00:06:13.926 ]' 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.926 15:11:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:13.926 00:06:13.926 real 0m0.297s 00:06:13.926 user 0m0.198s 00:06:13.926 sys 0m0.035s 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.926 15:11:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.926 ************************************ 00:06:13.926 END TEST rpc_daemon_integrity 00:06:13.926 ************************************ 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:14.187 15:11:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:14.187 15:11:23 rpc -- rpc/rpc.sh@84 -- # killprocess 476889 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 476889 ']' 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@952 -- # kill -0 476889 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@953 -- # uname 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 476889 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 476889' 00:06:14.187 killing process with pid 476889 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@967 -- # kill 476889 00:06:14.187 15:11:23 rpc -- common/autotest_common.sh@972 -- # wait 476889 00:06:14.477 00:06:14.477 real 0m2.401s 00:06:14.477 user 0m3.144s 00:06:14.477 sys 0m0.675s 00:06:14.477 15:11:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.477 15:11:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.477 ************************************ 00:06:14.477 END TEST rpc 00:06:14.477 ************************************ 00:06:14.477 15:11:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.477 15:11:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:14.477 15:11:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.477 15:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.477 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:06:14.477 ************************************ 00:06:14.477 START TEST skip_rpc 00:06:14.477 ************************************ 00:06:14.477 15:11:23 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:14.477 * Looking for test storage... 
00:06:14.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:14.477 15:11:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:14.477 15:11:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.477 15:11:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:14.477 15:11:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.477 15:11:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.477 15:11:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.477 ************************************ 00:06:14.477 START TEST skip_rpc 00:06:14.477 ************************************ 00:06:14.477 15:11:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:14.477 15:11:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=477415 00:06:14.477 15:11:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.477 15:11:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:14.477 15:11:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:14.477 [2024-07-15 15:11:24.079170] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:14.477 [2024-07-15 15:11:24.079223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477415 ] 00:06:14.743 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.743 [2024-07-15 15:11:24.142952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.743 [2024-07-15 15:11:24.208425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 477415 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 477415 ']' 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 477415 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 477415 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 477415' 00:06:20.043 killing process with pid 477415 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 477415 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 477415 00:06:20.043 00:06:20.043 real 0m5.278s 00:06:20.043 user 0m5.071s 00:06:20.043 sys 0m0.239s 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.043 15:11:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 ************************************ 00:06:20.043 END TEST skip_rpc 00:06:20.043 ************************************ 00:06:20.043 15:11:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.043 15:11:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:20.043 15:11:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.043 15:11:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.043 15:11:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.043 ************************************ 00:06:20.043 START TEST skip_rpc_with_json 00:06:20.043 ************************************ 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=478573 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 478573 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 478573 ']' 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.043 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
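(Aside: the skip_rpc case that just finished is a single negative check: started with --no-rpc-server, the target never opens an RPC socket, so spdk_get_version must fail. A sketch of that flow; the binary path, the fixed five-second wait and the RPC name come from the log above, the rest is an assumption.)

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
tgt=$!
sleep 5   # no socket will ever appear, so the test simply waits a fixed time

if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
    echo "unexpected: RPC answered although no RPC server was started" >&2
    exit 1
fi

kill -9 "$tgt"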
00:06:20.044 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.044 15:11:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.044 [2024-07-15 15:11:29.435067] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:20.044 [2024-07-15 15:11:29.435124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478573 ] 00:06:20.044 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.044 [2024-07-15 15:11:29.502779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.044 [2024-07-15 15:11:29.574902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.615 [2024-07-15 15:11:30.201793] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:20.615 request: 00:06:20.615 { 00:06:20.615 "trtype": "tcp", 00:06:20.615 "method": "nvmf_get_transports", 00:06:20.615 "req_id": 1 00:06:20.615 } 00:06:20.615 Got JSON-RPC error response 00:06:20.615 response: 00:06:20.615 { 00:06:20.615 "code": -19, 00:06:20.615 "message": "No such device" 00:06:20.615 } 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.615 [2024-07-15 15:11:30.213920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.615 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.876 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.876 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:20.876 { 00:06:20.876 "subsystems": [ 00:06:20.876 { 00:06:20.876 "subsystem": "vfio_user_target", 00:06:20.876 "config": null 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "keyring", 00:06:20.876 "config": [] 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "iobuf", 00:06:20.876 "config": [ 00:06:20.876 { 00:06:20.876 "method": "iobuf_set_options", 00:06:20.876 "params": { 00:06:20.876 "small_pool_count": 8192, 00:06:20.876 "large_pool_count": 1024, 00:06:20.876 "small_bufsize": 8192, 00:06:20.876 "large_bufsize": 
135168 00:06:20.876 } 00:06:20.876 } 00:06:20.876 ] 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "sock", 00:06:20.876 "config": [ 00:06:20.876 { 00:06:20.876 "method": "sock_set_default_impl", 00:06:20.876 "params": { 00:06:20.876 "impl_name": "posix" 00:06:20.876 } 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "method": "sock_impl_set_options", 00:06:20.876 "params": { 00:06:20.876 "impl_name": "ssl", 00:06:20.876 "recv_buf_size": 4096, 00:06:20.876 "send_buf_size": 4096, 00:06:20.876 "enable_recv_pipe": true, 00:06:20.876 "enable_quickack": false, 00:06:20.876 "enable_placement_id": 0, 00:06:20.876 "enable_zerocopy_send_server": true, 00:06:20.876 "enable_zerocopy_send_client": false, 00:06:20.876 "zerocopy_threshold": 0, 00:06:20.876 "tls_version": 0, 00:06:20.876 "enable_ktls": false 00:06:20.876 } 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "method": "sock_impl_set_options", 00:06:20.876 "params": { 00:06:20.876 "impl_name": "posix", 00:06:20.876 "recv_buf_size": 2097152, 00:06:20.876 "send_buf_size": 2097152, 00:06:20.876 "enable_recv_pipe": true, 00:06:20.876 "enable_quickack": false, 00:06:20.876 "enable_placement_id": 0, 00:06:20.876 "enable_zerocopy_send_server": true, 00:06:20.876 "enable_zerocopy_send_client": false, 00:06:20.876 "zerocopy_threshold": 0, 00:06:20.876 "tls_version": 0, 00:06:20.876 "enable_ktls": false 00:06:20.876 } 00:06:20.876 } 00:06:20.876 ] 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "vmd", 00:06:20.876 "config": [] 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "accel", 00:06:20.876 "config": [ 00:06:20.876 { 00:06:20.876 "method": "accel_set_options", 00:06:20.876 "params": { 00:06:20.876 "small_cache_size": 128, 00:06:20.876 "large_cache_size": 16, 00:06:20.876 "task_count": 2048, 00:06:20.876 "sequence_count": 2048, 00:06:20.876 "buf_count": 2048 00:06:20.876 } 00:06:20.876 } 00:06:20.876 ] 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "subsystem": "bdev", 00:06:20.876 "config": [ 00:06:20.876 { 00:06:20.876 "method": "bdev_set_options", 00:06:20.876 "params": { 00:06:20.876 "bdev_io_pool_size": 65535, 00:06:20.876 "bdev_io_cache_size": 256, 00:06:20.876 "bdev_auto_examine": true, 00:06:20.876 "iobuf_small_cache_size": 128, 00:06:20.876 "iobuf_large_cache_size": 16 00:06:20.876 } 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "method": "bdev_raid_set_options", 00:06:20.876 "params": { 00:06:20.876 "process_window_size_kb": 1024 00:06:20.876 } 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "method": "bdev_iscsi_set_options", 00:06:20.876 "params": { 00:06:20.876 "timeout_sec": 30 00:06:20.876 } 00:06:20.876 }, 00:06:20.876 { 00:06:20.876 "method": "bdev_nvme_set_options", 00:06:20.876 "params": { 00:06:20.876 "action_on_timeout": "none", 00:06:20.876 "timeout_us": 0, 00:06:20.876 "timeout_admin_us": 0, 00:06:20.876 "keep_alive_timeout_ms": 10000, 00:06:20.876 "arbitration_burst": 0, 00:06:20.876 "low_priority_weight": 0, 00:06:20.876 "medium_priority_weight": 0, 00:06:20.876 "high_priority_weight": 0, 00:06:20.876 "nvme_adminq_poll_period_us": 10000, 00:06:20.876 "nvme_ioq_poll_period_us": 0, 00:06:20.876 "io_queue_requests": 0, 00:06:20.876 "delay_cmd_submit": true, 00:06:20.876 "transport_retry_count": 4, 00:06:20.877 "bdev_retry_count": 3, 00:06:20.877 "transport_ack_timeout": 0, 00:06:20.877 "ctrlr_loss_timeout_sec": 0, 00:06:20.877 "reconnect_delay_sec": 0, 00:06:20.877 "fast_io_fail_timeout_sec": 0, 00:06:20.877 "disable_auto_failback": false, 00:06:20.877 "generate_uuids": false, 00:06:20.877 "transport_tos": 0, 
00:06:20.877 "nvme_error_stat": false, 00:06:20.877 "rdma_srq_size": 0, 00:06:20.877 "io_path_stat": false, 00:06:20.877 "allow_accel_sequence": false, 00:06:20.877 "rdma_max_cq_size": 0, 00:06:20.877 "rdma_cm_event_timeout_ms": 0, 00:06:20.877 "dhchap_digests": [ 00:06:20.877 "sha256", 00:06:20.877 "sha384", 00:06:20.877 "sha512" 00:06:20.877 ], 00:06:20.877 "dhchap_dhgroups": [ 00:06:20.877 "null", 00:06:20.877 "ffdhe2048", 00:06:20.877 "ffdhe3072", 00:06:20.877 "ffdhe4096", 00:06:20.877 "ffdhe6144", 00:06:20.877 "ffdhe8192" 00:06:20.877 ] 00:06:20.877 } 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "method": "bdev_nvme_set_hotplug", 00:06:20.877 "params": { 00:06:20.877 "period_us": 100000, 00:06:20.877 "enable": false 00:06:20.877 } 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "method": "bdev_wait_for_examine" 00:06:20.877 } 00:06:20.877 ] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "scsi", 00:06:20.877 "config": null 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "scheduler", 00:06:20.877 "config": [ 00:06:20.877 { 00:06:20.877 "method": "framework_set_scheduler", 00:06:20.877 "params": { 00:06:20.877 "name": "static" 00:06:20.877 } 00:06:20.877 } 00:06:20.877 ] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "vhost_scsi", 00:06:20.877 "config": [] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "vhost_blk", 00:06:20.877 "config": [] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "ublk", 00:06:20.877 "config": [] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "nbd", 00:06:20.877 "config": [] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "nvmf", 00:06:20.877 "config": [ 00:06:20.877 { 00:06:20.877 "method": "nvmf_set_config", 00:06:20.877 "params": { 00:06:20.877 "discovery_filter": "match_any", 00:06:20.877 "admin_cmd_passthru": { 00:06:20.877 "identify_ctrlr": false 00:06:20.877 } 00:06:20.877 } 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "method": "nvmf_set_max_subsystems", 00:06:20.877 "params": { 00:06:20.877 "max_subsystems": 1024 00:06:20.877 } 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "method": "nvmf_set_crdt", 00:06:20.877 "params": { 00:06:20.877 "crdt1": 0, 00:06:20.877 "crdt2": 0, 00:06:20.877 "crdt3": 0 00:06:20.877 } 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "method": "nvmf_create_transport", 00:06:20.877 "params": { 00:06:20.877 "trtype": "TCP", 00:06:20.877 "max_queue_depth": 128, 00:06:20.877 "max_io_qpairs_per_ctrlr": 127, 00:06:20.877 "in_capsule_data_size": 4096, 00:06:20.877 "max_io_size": 131072, 00:06:20.877 "io_unit_size": 131072, 00:06:20.877 "max_aq_depth": 128, 00:06:20.877 "num_shared_buffers": 511, 00:06:20.877 "buf_cache_size": 4294967295, 00:06:20.877 "dif_insert_or_strip": false, 00:06:20.877 "zcopy": false, 00:06:20.877 "c2h_success": true, 00:06:20.877 "sock_priority": 0, 00:06:20.877 "abort_timeout_sec": 1, 00:06:20.877 "ack_timeout": 0, 00:06:20.877 "data_wr_pool_size": 0 00:06:20.877 } 00:06:20.877 } 00:06:20.877 ] 00:06:20.877 }, 00:06:20.877 { 00:06:20.877 "subsystem": "iscsi", 00:06:20.877 "config": [ 00:06:20.877 { 00:06:20.877 "method": "iscsi_set_options", 00:06:20.877 "params": { 00:06:20.877 "node_base": "iqn.2016-06.io.spdk", 00:06:20.877 "max_sessions": 128, 00:06:20.877 "max_connections_per_session": 2, 00:06:20.877 "max_queue_depth": 64, 00:06:20.877 "default_time2wait": 2, 00:06:20.877 "default_time2retain": 20, 00:06:20.877 "first_burst_length": 8192, 00:06:20.877 "immediate_data": true, 00:06:20.877 "allow_duplicated_isid": false, 00:06:20.877 
"error_recovery_level": 0, 00:06:20.877 "nop_timeout": 60, 00:06:20.877 "nop_in_interval": 30, 00:06:20.877 "disable_chap": false, 00:06:20.877 "require_chap": false, 00:06:20.877 "mutual_chap": false, 00:06:20.877 "chap_group": 0, 00:06:20.877 "max_large_datain_per_connection": 64, 00:06:20.877 "max_r2t_per_connection": 4, 00:06:20.877 "pdu_pool_size": 36864, 00:06:20.877 "immediate_data_pool_size": 16384, 00:06:20.877 "data_out_pool_size": 2048 00:06:20.877 } 00:06:20.877 } 00:06:20.877 ] 00:06:20.877 } 00:06:20.877 ] 00:06:20.877 } 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 478573 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 478573 ']' 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 478573 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 478573 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 478573' 00:06:20.877 killing process with pid 478573 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 478573 00:06:20.877 15:11:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 478573 00:06:21.138 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=478791 00:06:21.138 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:21.138 15:11:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 478791 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 478791 ']' 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 478791 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 478791 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 478791' 00:06:26.454 killing process with pid 478791 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 478791 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 478791 00:06:26.454 15:11:35 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:26.454 00:06:26.454 real 0m6.545s 00:06:26.454 user 0m6.401s 00:06:26.454 sys 0m0.547s 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.454 15:11:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 ************************************ 00:06:26.454 END TEST skip_rpc_with_json 00:06:26.454 ************************************ 00:06:26.454 15:11:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:26.454 15:11:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:26.454 15:11:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.454 15:11:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.454 15:11:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 ************************************ 00:06:26.454 START TEST skip_rpc_with_delay 00:06:26.454 ************************************ 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:26.454 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:26.454 [2024-07-15 15:11:36.064362] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
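(Aside: the grep at the start of this stretch closes out skip_rpc_with_json, which is a save/restore round trip: create the TCP transport over RPC, dump the live configuration with save_config, restart the target from that JSON alone, and confirm 'TCP Transport Init' appears in its log. A condensed sketch of the same round trip; the test itself writes test/rpc/config.json and test/rpc/log.txt, the /tmp paths below are assumptions.)

CONFIG=/tmp/config.json
LOG=/tmp/spdk_tgt.log

# against the first, normally started target:
"$rpc" nvmf_create_transport -t tcp
"$rpc" save_config > "$CONFIG"

# (the real test kills that first target here via killprocess, then relaunches)
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
sleep 5

grep -q 'TCP Transport Init' "$LOG" && echo "TCP transport restored from the JSON config"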
00:06:26.454 [2024-07-15 15:11:36.064454] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.715 00:06:26.715 real 0m0.076s 00:06:26.715 user 0m0.043s 00:06:26.715 sys 0m0.033s 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.715 15:11:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:26.715 ************************************ 00:06:26.715 END TEST skip_rpc_with_delay 00:06:26.715 ************************************ 00:06:26.715 15:11:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:26.715 15:11:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:26.715 15:11:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:26.715 15:11:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:26.715 15:11:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.715 15:11:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.715 15:11:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.715 ************************************ 00:06:26.715 START TEST exit_on_failed_rpc_init 00:06:26.715 ************************************ 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=480066 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 480066 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 480066 ']' 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.715 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:26.715 [2024-07-15 15:11:36.215001] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:26.715 [2024-07-15 15:11:36.215054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480066 ] 00:06:26.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.715 [2024-07-15 15:11:36.278576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.975 [2024-07-15 15:11:36.346762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:27.544 15:11:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:27.544 [2024-07-15 15:11:37.050834] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:27.544 [2024-07-15 15:11:37.050891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480192 ] 00:06:27.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.544 [2024-07-15 15:11:37.111846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.803 [2024-07-15 15:11:37.175836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.803 [2024-07-15 15:11:37.175900] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:27.803 [2024-07-15 15:11:37.175911] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:27.803 [2024-07-15 15:11:37.175918] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 480066 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 480066 ']' 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 480066 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 480066 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 480066' 00:06:27.803 killing process with pid 480066 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 480066 00:06:27.803 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 480066 00:06:28.064 00:06:28.064 real 0m1.340s 00:06:28.064 user 0m1.582s 00:06:28.064 sys 0m0.357s 00:06:28.064 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.064 15:11:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.064 ************************************ 00:06:28.064 END TEST exit_on_failed_rpc_init 00:06:28.064 ************************************ 00:06:28.064 15:11:37 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.064 15:11:37 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:28.064 00:06:28.064 real 0m13.656s 00:06:28.064 user 0m13.249s 00:06:28.064 sys 0m1.466s 00:06:28.064 15:11:37 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.064 15:11:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.064 ************************************ 00:06:28.064 END TEST skip_rpc 00:06:28.064 ************************************ 00:06:28.064 15:11:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.064 15:11:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:28.064 15:11:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.064 15:11:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.064 15:11:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.064 ************************************ 00:06:28.064 START TEST rpc_client 00:06:28.064 ************************************ 00:06:28.064 15:11:37 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:28.325 * Looking for test storage... 00:06:28.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:28.325 15:11:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:28.325 OK 00:06:28.325 15:11:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:28.325 00:06:28.325 real 0m0.128s 00:06:28.325 user 0m0.057s 00:06:28.325 sys 0m0.079s 00:06:28.325 15:11:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.325 15:11:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:28.325 ************************************ 00:06:28.325 END TEST rpc_client 00:06:28.325 ************************************ 00:06:28.325 15:11:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:28.325 15:11:37 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:28.325 15:11:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.325 15:11:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.325 15:11:37 -- common/autotest_common.sh@10 -- # set +x 00:06:28.325 ************************************ 00:06:28.325 START TEST json_config 00:06:28.325 ************************************ 00:06:28.325 15:11:37 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.325 15:11:37 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.325 15:11:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.325 15:11:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.325 15:11:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.325 15:11:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.325 15:11:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.325 15:11:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.325 15:11:37 json_config -- paths/export.sh@5 -- # export PATH 00:06:28.325 15:11:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@47 -- # : 0 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.325 15:11:37 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:28.325 15:11:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:28.325 INFO: JSON configuration test init 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:28.325 15:11:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.325 15:11:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.325 15:11:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:28.325 15:11:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.325 15:11:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.585 15:11:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:28.585 15:11:37 json_config -- json_config/common.sh@9 -- # local app=target 00:06:28.585 15:11:37 json_config -- json_config/common.sh@10 -- # shift 00:06:28.585 15:11:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.585 15:11:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.585 15:11:37 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:06:28.586 15:11:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.586 15:11:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.586 15:11:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=480574 00:06:28.586 15:11:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.586 Waiting for target to run... 00:06:28.586 15:11:37 json_config -- json_config/common.sh@25 -- # waitforlisten 480574 /var/tmp/spdk_tgt.sock 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 480574 ']' 00:06:28.586 15:11:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.586 15:11:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.586 [2024-07-15 15:11:38.006347] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:28.586 [2024-07-15 15:11:38.006421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480574 ] 00:06:28.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.845 [2024-07-15 15:11:38.327448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.845 [2024-07-15 15:11:38.379223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:29.414 15:11:38 json_config -- json_config/common.sh@26 -- # echo '' 00:06:29.414 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.414 15:11:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:29.414 15:11:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:29.414 15:11:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:29.984 15:11:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:29.984 15:11:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:29.984 15:11:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.984 15:11:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:30.244 MallocForNvmf0 00:06:30.244 15:11:39 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:30.245 15:11:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:30.506 MallocForNvmf1 
00:06:30.506 15:11:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.506 15:11:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.506 [2024-07-15 15:11:40.030971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.506 15:11:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.506 15:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.766 15:11:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.766 15:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.766 15:11:40 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.766 15:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:31.027 15:11:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.027 15:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.027 [2024-07-15 15:11:40.628932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.027 15:11:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:31.027 15:11:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.027 15:11:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.287 15:11:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:31.287 15:11:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.287 15:11:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.287 15:11:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:31.287 15:11:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.287 15:11:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.287 MallocBdevForConfigChangeCheck 00:06:31.287 15:11:40 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:31.287 15:11:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.287 15:11:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.546 15:11:40 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:31.546 15:11:40 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.806 15:11:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:31.806 INFO: shutting down applications... 00:06:31.806 15:11:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:31.806 15:11:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:31.806 15:11:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:31.806 15:11:41 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.066 Calling clear_iscsi_subsystem 00:06:32.066 Calling clear_nvmf_subsystem 00:06:32.066 Calling clear_nbd_subsystem 00:06:32.066 Calling clear_ublk_subsystem 00:06:32.066 Calling clear_vhost_blk_subsystem 00:06:32.066 Calling clear_vhost_scsi_subsystem 00:06:32.066 Calling clear_bdev_subsystem 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:32.067 15:11:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:32.326 15:11:41 json_config -- json_config/json_config.sh@345 -- # break 00:06:32.326 15:11:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:32.326 15:11:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:32.326 15:11:41 json_config -- json_config/common.sh@31 -- # local app=target 00:06:32.326 15:11:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.326 15:11:41 json_config -- json_config/common.sh@35 -- # [[ -n 480574 ]] 00:06:32.326 15:11:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 480574 00:06:32.326 15:11:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.326 15:11:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.326 15:11:41 json_config -- json_config/common.sh@41 -- # kill -0 480574 00:06:32.326 15:11:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.896 15:11:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.896 15:11:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.896 15:11:42 json_config -- json_config/common.sh@41 -- # kill -0 480574 00:06:32.896 15:11:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:32.896 15:11:42 json_config -- json_config/common.sh@43 -- # break 00:06:32.896 15:11:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:32.896 15:11:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:32.896 SPDK target shutdown done 00:06:32.896 15:11:42 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:32.896 INFO: relaunching applications... 00:06:32.896 15:11:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.896 15:11:42 json_config -- json_config/common.sh@9 -- # local app=target 00:06:32.896 15:11:42 json_config -- json_config/common.sh@10 -- # shift 00:06:32.896 15:11:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:32.896 15:11:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:32.896 15:11:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:32.896 15:11:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.896 15:11:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:32.896 15:11:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=481442 00:06:32.896 15:11:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:32.896 Waiting for target to run... 00:06:32.896 15:11:42 json_config -- json_config/common.sh@25 -- # waitforlisten 481442 /var/tmp/spdk_tgt.sock 00:06:32.896 15:11:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 481442 ']' 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:32.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.896 15:11:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:32.896 [2024-07-15 15:11:42.470129] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:32.896 [2024-07-15 15:11:42.470188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481442 ] 00:06:32.896 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.467 [2024-07-15 15:11:42.883138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.467 [2024-07-15 15:11:42.943853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.038 [2024-07-15 15:11:43.441406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.038 [2024-07-15 15:11:43.473757] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.038 15:11:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.038 15:11:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:34.038 15:11:43 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.038 00:06:34.038 15:11:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:34.038 15:11:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.038 INFO: Checking if target configuration is the same... 00:06:34.038 15:11:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:34.038 15:11:43 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.038 15:11:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.038 + '[' 2 -ne 2 ']' 00:06:34.038 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.038 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.038 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.038 +++ basename /dev/fd/62 00:06:34.038 ++ mktemp /tmp/62.XXX 00:06:34.038 + tmp_file_1=/tmp/62.lge 00:06:34.038 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.038 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.038 + tmp_file_2=/tmp/spdk_tgt_config.json.plG 00:06:34.038 + ret=0 00:06:34.038 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.299 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.299 + diff -u /tmp/62.lge /tmp/spdk_tgt_config.json.plG 00:06:34.299 + echo 'INFO: JSON config files are the same' 00:06:34.299 INFO: JSON config files are the same 00:06:34.299 + rm /tmp/62.lge /tmp/spdk_tgt_config.json.plG 00:06:34.299 + exit 0 00:06:34.299 15:11:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:34.299 15:11:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.299 INFO: changing configuration and checking if this can be detected... 
00:06:34.299 15:11:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.299 15:11:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.559 15:11:43 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:34.559 15:11:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.559 15:11:43 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.559 + '[' 2 -ne 2 ']' 00:06:34.559 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:34.559 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:34.559 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:34.559 +++ basename /dev/fd/62 00:06:34.559 ++ mktemp /tmp/62.XXX 00:06:34.559 + tmp_file_1=/tmp/62.AbC 00:06:34.559 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:34.559 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.559 + tmp_file_2=/tmp/spdk_tgt_config.json.kqO 00:06:34.559 + ret=0 00:06:34.559 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.820 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:34.820 + diff -u /tmp/62.AbC /tmp/spdk_tgt_config.json.kqO 00:06:34.820 + ret=1 00:06:34.820 + echo '=== Start of file: /tmp/62.AbC ===' 00:06:34.820 + cat /tmp/62.AbC 00:06:34.820 + echo '=== End of file: /tmp/62.AbC ===' 00:06:34.820 + echo '' 00:06:34.820 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kqO ===' 00:06:34.820 + cat /tmp/spdk_tgt_config.json.kqO 00:06:34.820 + echo '=== End of file: /tmp/spdk_tgt_config.json.kqO ===' 00:06:34.820 + echo '' 00:06:34.820 + rm /tmp/62.AbC /tmp/spdk_tgt_config.json.kqO 00:06:34.820 + exit 1 00:06:34.820 15:11:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:34.820 INFO: configuration change detected. 
00:06:34.820 15:11:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:34.820 15:11:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:34.820 15:11:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.820 15:11:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 481442 ]] 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:34.821 15:11:44 json_config -- json_config/json_config.sh@323 -- # killprocess 481442 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@948 -- # '[' -z 481442 ']' 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@952 -- # kill -0 481442 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@953 -- # uname 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.821 15:11:44 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 481442 00:06:35.081 15:11:44 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.081 15:11:44 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.081 15:11:44 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 481442' 00:06:35.081 killing process with pid 481442 00:06:35.081 15:11:44 json_config -- common/autotest_common.sh@967 -- # kill 481442 00:06:35.081 15:11:44 json_config -- common/autotest_common.sh@972 -- # wait 481442 00:06:35.341 15:11:44 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:35.341 15:11:44 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:35.341 15:11:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.341 15:11:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.341 15:11:44 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:35.341 15:11:44 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:35.341 INFO: Success 00:06:35.341 00:06:35.341 real 0m6.968s 00:06:35.341 user 
0m8.147s 00:06:35.341 sys 0m1.847s 00:06:35.341 15:11:44 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.341 15:11:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.341 ************************************ 00:06:35.341 END TEST json_config 00:06:35.341 ************************************ 00:06:35.342 15:11:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.342 15:11:44 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.342 15:11:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.342 15:11:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.342 15:11:44 -- common/autotest_common.sh@10 -- # set +x 00:06:35.342 ************************************ 00:06:35.342 START TEST json_config_extra_key 00:06:35.342 ************************************ 00:06:35.342 15:11:44 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.342 15:11:44 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.342 15:11:44 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.342 15:11:44 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.342 15:11:44 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.342 15:11:44 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.342 15:11:44 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.342 15:11:44 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:35.342 15:11:44 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.342 15:11:44 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:35.342 15:11:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:35.342 INFO: launching applications... 00:06:35.342 15:11:44 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.342 15:11:44 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:35.342 15:11:44 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.602 15:11:44 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=482212 00:06:35.603 15:11:44 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.603 Waiting for target to run... 00:06:35.603 15:11:44 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 482212 /var/tmp/spdk_tgt.sock 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 482212 ']' 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.603 15:11:44 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.603 15:11:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:35.603 [2024-07-15 15:11:45.017398] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:35.603 [2024-07-15 15:11:45.017455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482212 ] 00:06:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.863 [2024-07-15 15:11:45.241893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.863 [2024-07-15 15:11:45.291954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.434 15:11:45 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.434 15:11:45 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:36.434 15:11:45 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:36.434 00:06:36.434 15:11:45 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:36.434 INFO: shutting down applications... 00:06:36.434 15:11:45 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:36.434 15:11:45 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 482212 ]] 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 482212 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 482212 00:06:36.435 15:11:45 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 482212 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:36.696 15:11:46 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:36.696 SPDK target shutdown done 00:06:36.696 15:11:46 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:36.696 Success 00:06:36.696 00:06:36.696 real 0m1.452s 00:06:36.696 user 0m1.169s 00:06:36.696 sys 0m0.326s 00:06:36.696 15:11:46 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.696 15:11:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.696 ************************************ 00:06:36.696 END TEST json_config_extra_key 00:06:36.696 ************************************ 00:06:36.958 15:11:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.958 15:11:46 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:36.958 15:11:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.958 15:11:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.958 15:11:46 -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.958 ************************************ 00:06:36.958 START TEST alias_rpc 00:06:36.958 ************************************ 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:36.958 * Looking for test storage... 00:06:36.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:36.958 15:11:46 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.958 15:11:46 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=482597 00:06:36.958 15:11:46 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 482597 00:06:36.958 15:11:46 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 482597 ']' 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.958 15:11:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.958 [2024-07-15 15:11:46.549592] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:36.958 [2024-07-15 15:11:46.549663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482597 ] 00:06:37.217 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.217 [2024-07-15 15:11:46.617032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.217 [2024-07-15 15:11:46.691368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.788 15:11:47 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.788 15:11:47 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:37.788 15:11:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:38.048 15:11:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 482597 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 482597 ']' 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 482597 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482597 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482597' 00:06:38.048 killing process with pid 482597 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@967 
-- # kill 482597 00:06:38.048 15:11:47 alias_rpc -- common/autotest_common.sh@972 -- # wait 482597 00:06:38.308 00:06:38.308 real 0m1.365s 00:06:38.308 user 0m1.489s 00:06:38.308 sys 0m0.378s 00:06:38.308 15:11:47 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.308 15:11:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.308 ************************************ 00:06:38.308 END TEST alias_rpc 00:06:38.308 ************************************ 00:06:38.308 15:11:47 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.308 15:11:47 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:38.308 15:11:47 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.308 15:11:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.308 15:11:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.308 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:06:38.308 ************************************ 00:06:38.308 START TEST spdkcli_tcp 00:06:38.308 ************************************ 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:38.308 * Looking for test storage... 00:06:38.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=482884 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 482884 00:06:38.308 15:11:47 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 482884 ']' 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.308 15:11:47 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.309 15:11:47 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.309 15:11:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.568 [2024-07-15 15:11:47.985291] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
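The alias_rpc pass above reduces to replaying a JSON configuration over the default RPC socket; a sketch under the assumption that the config was captured earlier with save_config (the filename is a placeholder, not taken from this run):
# Capture the running target's configuration, then replay it; -i on load_config
# accepts deprecated RPC method aliases, which is the behaviour alias_rpc.sh checks.
./scripts/rpc.py -s /var/tmp/spdk.sock save_config > saved_config.json
./scripts/rpc.py -s /var/tmp/spdk.sock load_config -i < saved_config.json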
00:06:38.568 [2024-07-15 15:11:47.985367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482884 ] 00:06:38.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.568 [2024-07-15 15:11:48.053850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.568 [2024-07-15 15:11:48.131267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.568 [2024-07-15 15:11:48.131273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.139 15:11:48 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.139 15:11:48 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=483000 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.400 [ 00:06:39.400 "bdev_malloc_delete", 00:06:39.400 "bdev_malloc_create", 00:06:39.400 "bdev_null_resize", 00:06:39.400 "bdev_null_delete", 00:06:39.400 "bdev_null_create", 00:06:39.400 "bdev_nvme_cuse_unregister", 00:06:39.400 "bdev_nvme_cuse_register", 00:06:39.400 "bdev_opal_new_user", 00:06:39.400 "bdev_opal_set_lock_state", 00:06:39.400 "bdev_opal_delete", 00:06:39.400 "bdev_opal_get_info", 00:06:39.400 "bdev_opal_create", 00:06:39.400 "bdev_nvme_opal_revert", 00:06:39.400 "bdev_nvme_opal_init", 00:06:39.400 "bdev_nvme_send_cmd", 00:06:39.400 "bdev_nvme_get_path_iostat", 00:06:39.400 "bdev_nvme_get_mdns_discovery_info", 00:06:39.400 "bdev_nvme_stop_mdns_discovery", 00:06:39.400 "bdev_nvme_start_mdns_discovery", 00:06:39.400 "bdev_nvme_set_multipath_policy", 00:06:39.400 "bdev_nvme_set_preferred_path", 00:06:39.400 "bdev_nvme_get_io_paths", 00:06:39.400 "bdev_nvme_remove_error_injection", 00:06:39.400 "bdev_nvme_add_error_injection", 00:06:39.400 "bdev_nvme_get_discovery_info", 00:06:39.400 "bdev_nvme_stop_discovery", 00:06:39.400 "bdev_nvme_start_discovery", 00:06:39.400 "bdev_nvme_get_controller_health_info", 00:06:39.400 "bdev_nvme_disable_controller", 00:06:39.400 "bdev_nvme_enable_controller", 00:06:39.400 "bdev_nvme_reset_controller", 00:06:39.400 "bdev_nvme_get_transport_statistics", 00:06:39.400 "bdev_nvme_apply_firmware", 00:06:39.400 "bdev_nvme_detach_controller", 00:06:39.400 "bdev_nvme_get_controllers", 00:06:39.400 "bdev_nvme_attach_controller", 00:06:39.400 "bdev_nvme_set_hotplug", 00:06:39.400 "bdev_nvme_set_options", 00:06:39.400 "bdev_passthru_delete", 00:06:39.400 "bdev_passthru_create", 00:06:39.400 "bdev_lvol_set_parent_bdev", 00:06:39.400 "bdev_lvol_set_parent", 00:06:39.400 "bdev_lvol_check_shallow_copy", 00:06:39.400 "bdev_lvol_start_shallow_copy", 00:06:39.400 "bdev_lvol_grow_lvstore", 00:06:39.400 "bdev_lvol_get_lvols", 00:06:39.400 "bdev_lvol_get_lvstores", 00:06:39.400 "bdev_lvol_delete", 00:06:39.400 "bdev_lvol_set_read_only", 00:06:39.400 "bdev_lvol_resize", 00:06:39.400 "bdev_lvol_decouple_parent", 00:06:39.400 "bdev_lvol_inflate", 00:06:39.400 "bdev_lvol_rename", 00:06:39.400 "bdev_lvol_clone_bdev", 00:06:39.400 "bdev_lvol_clone", 00:06:39.400 "bdev_lvol_snapshot", 00:06:39.400 "bdev_lvol_create", 00:06:39.400 "bdev_lvol_delete_lvstore", 00:06:39.400 
"bdev_lvol_rename_lvstore", 00:06:39.400 "bdev_lvol_create_lvstore", 00:06:39.400 "bdev_raid_set_options", 00:06:39.400 "bdev_raid_remove_base_bdev", 00:06:39.400 "bdev_raid_add_base_bdev", 00:06:39.400 "bdev_raid_delete", 00:06:39.400 "bdev_raid_create", 00:06:39.400 "bdev_raid_get_bdevs", 00:06:39.400 "bdev_error_inject_error", 00:06:39.400 "bdev_error_delete", 00:06:39.400 "bdev_error_create", 00:06:39.400 "bdev_split_delete", 00:06:39.400 "bdev_split_create", 00:06:39.400 "bdev_delay_delete", 00:06:39.400 "bdev_delay_create", 00:06:39.400 "bdev_delay_update_latency", 00:06:39.400 "bdev_zone_block_delete", 00:06:39.400 "bdev_zone_block_create", 00:06:39.400 "blobfs_create", 00:06:39.400 "blobfs_detect", 00:06:39.400 "blobfs_set_cache_size", 00:06:39.400 "bdev_aio_delete", 00:06:39.400 "bdev_aio_rescan", 00:06:39.400 "bdev_aio_create", 00:06:39.400 "bdev_ftl_set_property", 00:06:39.400 "bdev_ftl_get_properties", 00:06:39.400 "bdev_ftl_get_stats", 00:06:39.400 "bdev_ftl_unmap", 00:06:39.400 "bdev_ftl_unload", 00:06:39.400 "bdev_ftl_delete", 00:06:39.400 "bdev_ftl_load", 00:06:39.400 "bdev_ftl_create", 00:06:39.400 "bdev_virtio_attach_controller", 00:06:39.400 "bdev_virtio_scsi_get_devices", 00:06:39.400 "bdev_virtio_detach_controller", 00:06:39.400 "bdev_virtio_blk_set_hotplug", 00:06:39.400 "bdev_iscsi_delete", 00:06:39.400 "bdev_iscsi_create", 00:06:39.400 "bdev_iscsi_set_options", 00:06:39.400 "accel_error_inject_error", 00:06:39.400 "ioat_scan_accel_module", 00:06:39.400 "dsa_scan_accel_module", 00:06:39.400 "iaa_scan_accel_module", 00:06:39.400 "vfu_virtio_create_scsi_endpoint", 00:06:39.400 "vfu_virtio_scsi_remove_target", 00:06:39.400 "vfu_virtio_scsi_add_target", 00:06:39.400 "vfu_virtio_create_blk_endpoint", 00:06:39.400 "vfu_virtio_delete_endpoint", 00:06:39.400 "keyring_file_remove_key", 00:06:39.400 "keyring_file_add_key", 00:06:39.400 "keyring_linux_set_options", 00:06:39.400 "iscsi_get_histogram", 00:06:39.400 "iscsi_enable_histogram", 00:06:39.400 "iscsi_set_options", 00:06:39.400 "iscsi_get_auth_groups", 00:06:39.400 "iscsi_auth_group_remove_secret", 00:06:39.400 "iscsi_auth_group_add_secret", 00:06:39.400 "iscsi_delete_auth_group", 00:06:39.400 "iscsi_create_auth_group", 00:06:39.400 "iscsi_set_discovery_auth", 00:06:39.400 "iscsi_get_options", 00:06:39.400 "iscsi_target_node_request_logout", 00:06:39.400 "iscsi_target_node_set_redirect", 00:06:39.400 "iscsi_target_node_set_auth", 00:06:39.400 "iscsi_target_node_add_lun", 00:06:39.400 "iscsi_get_stats", 00:06:39.400 "iscsi_get_connections", 00:06:39.400 "iscsi_portal_group_set_auth", 00:06:39.400 "iscsi_start_portal_group", 00:06:39.400 "iscsi_delete_portal_group", 00:06:39.400 "iscsi_create_portal_group", 00:06:39.400 "iscsi_get_portal_groups", 00:06:39.400 "iscsi_delete_target_node", 00:06:39.400 "iscsi_target_node_remove_pg_ig_maps", 00:06:39.400 "iscsi_target_node_add_pg_ig_maps", 00:06:39.400 "iscsi_create_target_node", 00:06:39.400 "iscsi_get_target_nodes", 00:06:39.400 "iscsi_delete_initiator_group", 00:06:39.400 "iscsi_initiator_group_remove_initiators", 00:06:39.400 "iscsi_initiator_group_add_initiators", 00:06:39.400 "iscsi_create_initiator_group", 00:06:39.400 "iscsi_get_initiator_groups", 00:06:39.400 "nvmf_set_crdt", 00:06:39.400 "nvmf_set_config", 00:06:39.400 "nvmf_set_max_subsystems", 00:06:39.400 "nvmf_stop_mdns_prr", 00:06:39.400 "nvmf_publish_mdns_prr", 00:06:39.400 "nvmf_subsystem_get_listeners", 00:06:39.400 "nvmf_subsystem_get_qpairs", 00:06:39.400 "nvmf_subsystem_get_controllers", 00:06:39.400 
"nvmf_get_stats", 00:06:39.400 "nvmf_get_transports", 00:06:39.400 "nvmf_create_transport", 00:06:39.400 "nvmf_get_targets", 00:06:39.400 "nvmf_delete_target", 00:06:39.400 "nvmf_create_target", 00:06:39.400 "nvmf_subsystem_allow_any_host", 00:06:39.400 "nvmf_subsystem_remove_host", 00:06:39.400 "nvmf_subsystem_add_host", 00:06:39.400 "nvmf_ns_remove_host", 00:06:39.400 "nvmf_ns_add_host", 00:06:39.400 "nvmf_subsystem_remove_ns", 00:06:39.400 "nvmf_subsystem_add_ns", 00:06:39.400 "nvmf_subsystem_listener_set_ana_state", 00:06:39.400 "nvmf_discovery_get_referrals", 00:06:39.400 "nvmf_discovery_remove_referral", 00:06:39.400 "nvmf_discovery_add_referral", 00:06:39.400 "nvmf_subsystem_remove_listener", 00:06:39.400 "nvmf_subsystem_add_listener", 00:06:39.400 "nvmf_delete_subsystem", 00:06:39.400 "nvmf_create_subsystem", 00:06:39.400 "nvmf_get_subsystems", 00:06:39.400 "env_dpdk_get_mem_stats", 00:06:39.400 "nbd_get_disks", 00:06:39.400 "nbd_stop_disk", 00:06:39.400 "nbd_start_disk", 00:06:39.400 "ublk_recover_disk", 00:06:39.400 "ublk_get_disks", 00:06:39.400 "ublk_stop_disk", 00:06:39.400 "ublk_start_disk", 00:06:39.400 "ublk_destroy_target", 00:06:39.400 "ublk_create_target", 00:06:39.400 "virtio_blk_create_transport", 00:06:39.400 "virtio_blk_get_transports", 00:06:39.400 "vhost_controller_set_coalescing", 00:06:39.400 "vhost_get_controllers", 00:06:39.400 "vhost_delete_controller", 00:06:39.400 "vhost_create_blk_controller", 00:06:39.400 "vhost_scsi_controller_remove_target", 00:06:39.400 "vhost_scsi_controller_add_target", 00:06:39.400 "vhost_start_scsi_controller", 00:06:39.400 "vhost_create_scsi_controller", 00:06:39.400 "thread_set_cpumask", 00:06:39.400 "framework_get_governor", 00:06:39.400 "framework_get_scheduler", 00:06:39.400 "framework_set_scheduler", 00:06:39.400 "framework_get_reactors", 00:06:39.400 "thread_get_io_channels", 00:06:39.400 "thread_get_pollers", 00:06:39.400 "thread_get_stats", 00:06:39.400 "framework_monitor_context_switch", 00:06:39.400 "spdk_kill_instance", 00:06:39.400 "log_enable_timestamps", 00:06:39.400 "log_get_flags", 00:06:39.400 "log_clear_flag", 00:06:39.400 "log_set_flag", 00:06:39.400 "log_get_level", 00:06:39.400 "log_set_level", 00:06:39.400 "log_get_print_level", 00:06:39.400 "log_set_print_level", 00:06:39.400 "framework_enable_cpumask_locks", 00:06:39.400 "framework_disable_cpumask_locks", 00:06:39.400 "framework_wait_init", 00:06:39.400 "framework_start_init", 00:06:39.400 "scsi_get_devices", 00:06:39.400 "bdev_get_histogram", 00:06:39.400 "bdev_enable_histogram", 00:06:39.400 "bdev_set_qos_limit", 00:06:39.400 "bdev_set_qd_sampling_period", 00:06:39.400 "bdev_get_bdevs", 00:06:39.400 "bdev_reset_iostat", 00:06:39.400 "bdev_get_iostat", 00:06:39.400 "bdev_examine", 00:06:39.400 "bdev_wait_for_examine", 00:06:39.400 "bdev_set_options", 00:06:39.400 "notify_get_notifications", 00:06:39.400 "notify_get_types", 00:06:39.400 "accel_get_stats", 00:06:39.400 "accel_set_options", 00:06:39.400 "accel_set_driver", 00:06:39.400 "accel_crypto_key_destroy", 00:06:39.400 "accel_crypto_keys_get", 00:06:39.400 "accel_crypto_key_create", 00:06:39.400 "accel_assign_opc", 00:06:39.400 "accel_get_module_info", 00:06:39.400 "accel_get_opc_assignments", 00:06:39.400 "vmd_rescan", 00:06:39.400 "vmd_remove_device", 00:06:39.400 "vmd_enable", 00:06:39.400 "sock_get_default_impl", 00:06:39.400 "sock_set_default_impl", 00:06:39.400 "sock_impl_set_options", 00:06:39.400 "sock_impl_get_options", 00:06:39.400 "iobuf_get_stats", 00:06:39.400 "iobuf_set_options", 
00:06:39.400 "keyring_get_keys", 00:06:39.400 "framework_get_pci_devices", 00:06:39.400 "framework_get_config", 00:06:39.400 "framework_get_subsystems", 00:06:39.400 "vfu_tgt_set_base_path", 00:06:39.400 "trace_get_info", 00:06:39.400 "trace_get_tpoint_group_mask", 00:06:39.400 "trace_disable_tpoint_group", 00:06:39.400 "trace_enable_tpoint_group", 00:06:39.400 "trace_clear_tpoint_mask", 00:06:39.400 "trace_set_tpoint_mask", 00:06:39.400 "spdk_get_version", 00:06:39.400 "rpc_get_methods" 00:06:39.400 ] 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:39.400 15:11:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 482884 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 482884 ']' 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 482884 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.400 15:11:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 482884 00:06:39.400 15:11:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.400 15:11:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.400 15:11:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 482884' 00:06:39.400 killing process with pid 482884 00:06:39.400 15:11:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 482884 00:06:39.400 15:11:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 482884 00:06:39.697 00:06:39.698 real 0m1.398s 00:06:39.698 user 0m2.553s 00:06:39.698 sys 0m0.429s 00:06:39.698 15:11:49 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.698 15:11:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.698 ************************************ 00:06:39.698 END TEST spdkcli_tcp 00:06:39.698 ************************************ 00:06:39.698 15:11:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:39.698 15:11:49 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.698 15:11:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.698 15:11:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.698 15:11:49 -- common/autotest_common.sh@10 -- # set +x 00:06:39.698 ************************************ 00:06:39.698 START TEST dpdk_mem_utility 00:06:39.698 ************************************ 00:06:39.698 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:39.966 * Looking for test storage... 
00:06:39.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:39.966 15:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:39.966 15:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=483179 00:06:39.966 15:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 483179 00:06:39.966 15:11:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 483179 ']' 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.966 15:11:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:39.966 [2024-07-15 15:11:49.434529] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:39.966 [2024-07-15 15:11:49.434587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483179 ] 00:06:39.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.966 [2024-07-15 15:11:49.500678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.966 [2024-07-15 15:11:49.568557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.908 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.908 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:40.908 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:40.908 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:40.908 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.908 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.908 { 00:06:40.908 "filename": "/tmp/spdk_mem_dump.txt" 00:06:40.908 } 00:06:40.908 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.908 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:40.908 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:40.908 1 heaps totaling size 814.000000 MiB 00:06:40.908 size: 814.000000 MiB heap id: 0 00:06:40.908 end heaps---------- 00:06:40.908 8 mempools totaling size 598.116089 MiB 00:06:40.908 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:40.908 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:40.908 size: 84.521057 MiB name: bdev_io_483179 00:06:40.908 size: 51.011292 MiB name: evtpool_483179 00:06:40.908 size: 
50.003479 MiB name: msgpool_483179 00:06:40.908 size: 21.763794 MiB name: PDU_Pool 00:06:40.908 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:40.908 size: 0.026123 MiB name: Session_Pool 00:06:40.908 end mempools------- 00:06:40.908 6 memzones totaling size 4.142822 MiB 00:06:40.908 size: 1.000366 MiB name: RG_ring_0_483179 00:06:40.908 size: 1.000366 MiB name: RG_ring_1_483179 00:06:40.908 size: 1.000366 MiB name: RG_ring_4_483179 00:06:40.908 size: 1.000366 MiB name: RG_ring_5_483179 00:06:40.908 size: 0.125366 MiB name: RG_ring_2_483179 00:06:40.908 size: 0.015991 MiB name: RG_ring_3_483179 00:06:40.908 end memzones------- 00:06:40.908 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:40.908 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:40.908 list of free elements. size: 12.519348 MiB 00:06:40.908 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:40.908 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:40.908 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:40.908 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:40.908 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:40.908 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:40.908 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:40.908 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:40.908 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:40.908 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:40.908 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:40.908 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:40.908 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:40.908 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:40.908 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:40.908 list of standard malloc elements. 
size: 199.218079 MiB 00:06:40.908 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:40.908 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:40.908 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:40.908 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:40.908 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:40.908 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:40.908 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:40.908 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:40.908 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:40.908 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:40.908 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:40.908 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:40.908 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:40.908 list of memzone associated elements. 
size: 602.262573 MiB 00:06:40.908 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:40.908 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:40.908 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:40.908 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:40.908 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:40.908 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_483179_0 00:06:40.908 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:40.908 associated memzone info: size: 48.002930 MiB name: MP_evtpool_483179_0 00:06:40.908 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:40.908 associated memzone info: size: 48.002930 MiB name: MP_msgpool_483179_0 00:06:40.908 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:40.908 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:40.908 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:40.908 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:40.908 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:40.908 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_483179 00:06:40.908 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:40.908 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_483179 00:06:40.908 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:40.908 associated memzone info: size: 1.007996 MiB name: MP_evtpool_483179 00:06:40.908 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:40.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:40.908 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:40.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:40.908 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:40.908 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:40.908 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:40.908 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:40.908 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:40.908 associated memzone info: size: 1.000366 MiB name: RG_ring_0_483179 00:06:40.908 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:40.908 associated memzone info: size: 1.000366 MiB name: RG_ring_1_483179 00:06:40.908 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:40.908 associated memzone info: size: 1.000366 MiB name: RG_ring_4_483179 00:06:40.908 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:40.908 associated memzone info: size: 1.000366 MiB name: RG_ring_5_483179 00:06:40.908 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:40.908 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_483179 00:06:40.908 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:40.908 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:40.908 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:40.908 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:40.908 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:40.908 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:40.908 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:40.908 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_483179 00:06:40.908 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:40.908 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:40.908 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:40.908 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:40.908 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:40.908 associated memzone info: size: 0.015991 MiB name: RG_ring_3_483179 00:06:40.909 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:40.909 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:40.909 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:40.909 associated memzone info: size: 0.000183 MiB name: MP_msgpool_483179 00:06:40.909 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:40.909 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_483179 00:06:40.909 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:40.909 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:40.909 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:40.909 15:11:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 483179 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 483179 ']' 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 483179 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 483179 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 483179' 00:06:40.909 killing process with pid 483179 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 483179 00:06:40.909 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 483179 00:06:41.170 00:06:41.170 real 0m1.266s 00:06:41.170 user 0m1.331s 00:06:41.170 sys 0m0.359s 00:06:41.170 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.170 15:11:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.170 ************************************ 00:06:41.170 END TEST dpdk_mem_utility 00:06:41.170 ************************************ 00:06:41.170 15:11:50 -- common/autotest_common.sh@1142 -- # return 0 00:06:41.170 15:11:50 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.170 15:11:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.170 15:11:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.170 15:11:50 -- common/autotest_common.sh@10 -- # set +x 00:06:41.170 ************************************ 00:06:41.170 START TEST event 00:06:41.170 ************************************ 00:06:41.170 15:11:50 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:41.170 * Looking for test storage... 
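The dpdk_mem_utility pass boils down to one RPC plus the helper script that parses its dump; a sketch, assuming the default /tmp/spdk_mem_dump.txt output location shown in the trace:
# Ask the target to dump its DPDK memory state, then summarize the dump file.
./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py          # heap / mempool / memzone totals
./scripts/dpdk_mem_info.py -m 0     # per-element detail for heap 0, as listed above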
00:06:41.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:41.170 15:11:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:41.170 15:11:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.170 15:11:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.170 15:11:50 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:41.170 15:11:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.170 15:11:50 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.170 ************************************ 00:06:41.170 START TEST event_perf 00:06:41.170 ************************************ 00:06:41.170 15:11:50 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.170 Running I/O for 1 seconds...[2024-07-15 15:11:50.786233] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:41.170 [2024-07-15 15:11:50.786341] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483468 ] 00:06:41.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.430 [2024-07-15 15:11:50.873294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.430 [2024-07-15 15:11:50.951334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.430 [2024-07-15 15:11:50.951450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.430 [2024-07-15 15:11:50.951593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.430 [2024-07-15 15:11:50.951594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.811 Running I/O for 1 seconds... 00:06:42.811 lcore 0: 177507 00:06:42.811 lcore 1: 177507 00:06:42.811 lcore 2: 177505 00:06:42.811 lcore 3: 177508 00:06:42.811 done. 00:06:42.811 00:06:42.811 real 0m1.240s 00:06:42.811 user 0m4.139s 00:06:42.811 sys 0m0.095s 00:06:42.811 15:11:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.811 15:11:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.811 ************************************ 00:06:42.811 END TEST event_perf 00:06:42.811 ************************************ 00:06:42.811 15:11:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:42.811 15:11:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.811 15:11:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:42.811 15:11:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.811 15:11:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.811 ************************************ 00:06:42.811 START TEST event_reactor 00:06:42.811 ************************************ 00:06:42.811 15:11:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:42.811 [2024-07-15 15:11:52.103308] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
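The event microbenchmarks exercised next are plain binaries; a sketch of the invocations this suite uses, where -m selects the reactor core mask and -t the runtime in seconds:
# Multi-core event dispatch benchmark: one per-lcore event counter per reactor.
./test/event/event_perf/event_perf -m 0xF -t 1
# Single-core reactor tick/timer test and reactor event-throughput test.
./test/event/reactor/reactor -t 1
./test/event/reactor_perf/reactor_perf -t 1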
00:06:42.811 [2024-07-15 15:11:52.103413] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483820 ] 00:06:42.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.811 [2024-07-15 15:11:52.169470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.811 [2024-07-15 15:11:52.234402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.757 test_start 00:06:43.757 oneshot 00:06:43.757 tick 100 00:06:43.757 tick 100 00:06:43.757 tick 250 00:06:43.757 tick 100 00:06:43.757 tick 100 00:06:43.757 tick 100 00:06:43.757 tick 250 00:06:43.757 tick 500 00:06:43.757 tick 100 00:06:43.757 tick 100 00:06:43.757 tick 250 00:06:43.757 tick 100 00:06:43.757 tick 100 00:06:43.757 test_end 00:06:43.757 00:06:43.757 real 0m1.204s 00:06:43.757 user 0m1.131s 00:06:43.757 sys 0m0.068s 00:06:43.757 15:11:53 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.757 15:11:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:43.757 ************************************ 00:06:43.757 END TEST event_reactor 00:06:43.757 ************************************ 00:06:43.757 15:11:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:43.757 15:11:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:43.757 15:11:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:43.757 15:11:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.757 15:11:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.757 ************************************ 00:06:43.757 START TEST event_reactor_perf 00:06:43.757 ************************************ 00:06:43.757 15:11:53 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.039 [2024-07-15 15:11:53.385631] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:06:44.039 [2024-07-15 15:11:53.385733] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484178 ] 00:06:44.039 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.039 [2024-07-15 15:11:53.451500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.039 [2024-07-15 15:11:53.515815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.978 test_start 00:06:44.978 test_end 00:06:44.978 Performance: 369679 events per second 00:06:44.978 00:06:44.978 real 0m1.202s 00:06:44.978 user 0m1.126s 00:06:44.978 sys 0m0.073s 00:06:44.978 15:11:54 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.978 15:11:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.978 ************************************ 00:06:44.978 END TEST event_reactor_perf 00:06:44.978 ************************************ 00:06:45.237 15:11:54 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.237 15:11:54 event -- event/event.sh@49 -- # uname -s 00:06:45.237 15:11:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:45.238 15:11:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.238 15:11:54 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.238 15:11:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.238 15:11:54 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.238 ************************************ 00:06:45.238 START TEST event_scheduler 00:06:45.238 ************************************ 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:45.238 * Looking for test storage... 00:06:45.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:45.238 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:45.238 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=484511 00:06:45.238 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.238 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:45.238 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 484511 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 484511 ']' 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.238 15:11:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.238 [2024-07-15 15:11:54.801070] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:06:45.238 [2024-07-15 15:11:54.801141] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484511 ] 00:06:45.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.238 [2024-07-15 15:11:54.857612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:45.497 [2024-07-15 15:11:54.914847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.497 [2024-07-15 15:11:54.914892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.497 [2024-07-15 15:11:54.915003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.497 [2024-07-15 15:11:54.915005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.497 15:11:54 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.497 15:11:54 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:45.497 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:45.497 15:11:54 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.497 15:11:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.497 [2024-07-15 15:11:54.995576] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:45.497 [2024-07-15 15:11:54.995588] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:45.497 [2024-07-15 15:11:54.995595] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:45.497 [2024-07-15 15:11:54.995599] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:45.497 [2024-07-15 15:11:54.995603] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:45.497 15:11:54 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.497 15:11:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.497 [2024-07-15 15:11:55.050101] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
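The scheduler_create_thread steps that follow go through the test app's RPC plugin; a sketch of the same calls, assuming the scheduler_plugin module from test/event/scheduler is importable (e.g. on PYTHONPATH) — these plugin methods exist only in the scheduler test application, not in stock spdk_tgt:
# Select the dynamic scheduler before the framework finishes init, then
# create a pinned, 100%-active thread through the test-only plugin RPCs.
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100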
00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.497 15:11:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.497 15:11:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.497 ************************************ 00:06:45.497 START TEST scheduler_create_thread 00:06:45.497 ************************************ 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.497 2 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.497 3 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.497 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 4 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 5 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.757 6 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.757 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.758 7 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.758 8 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.758 9 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.758 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.018 10 00:06:46.018 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.018 15:11:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.018 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.018 15:11:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.396 15:11:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.396 15:11:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:47.396 15:11:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:47.396 15:11:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.396 15:11:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.390 15:11:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.390 15:11:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:48.390 15:11:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.390 15:11:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.959 15:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.959 15:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.959 15:11:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.959 15:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.959 15:11:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.899 15:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.899 00:06:49.899 real 0m4.222s 00:06:49.899 user 0m0.023s 00:06:49.899 sys 0m0.008s 00:06:49.899 15:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.899 15:11:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.899 ************************************ 00:06:49.899 END TEST scheduler_create_thread 00:06:49.899 ************************************ 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:49.899 15:11:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.899 15:11:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 484511 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 484511 ']' 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 484511 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 484511 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 484511' 00:06:49.899 killing process with pid 484511 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 484511 00:06:49.899 15:11:59 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 484511 00:06:50.159 [2024-07-15 15:11:59.587323] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
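The remaining plugin calls above adjust and remove threads by the ids returned from the create calls; a sketch using the ids reported in this run (11 and 12), under the same plugin assumption as before:
# Drop thread 11 to 50% active load, then delete thread 12 entirely.
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12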
00:06:50.159 00:06:50.159 real 0m5.106s 00:06:50.159 user 0m10.295s 00:06:50.159 sys 0m0.349s 00:06:50.159 15:11:59 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.159 15:11:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.159 ************************************ 00:06:50.159 END TEST event_scheduler 00:06:50.159 ************************************ 00:06:50.419 15:11:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:50.419 15:11:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:50.419 15:11:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:50.419 15:11:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.419 15:11:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.419 15:11:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.419 ************************************ 00:06:50.419 START TEST app_repeat 00:06:50.419 ************************************ 00:06:50.419 15:11:59 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=485599 00:06:50.419 15:11:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:50.420 15:11:59 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:50.420 15:11:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 485599' 00:06:50.420 Process app_repeat pid: 485599 00:06:50.420 15:11:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.420 15:11:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:50.420 spdk_app_start Round 0 00:06:50.420 15:11:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 485599 /var/tmp/spdk-nbd.sock 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 485599 ']' 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.420 15:11:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.420 [2024-07-15 15:11:59.879879] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
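The app_repeat test that starts here is a start/kill loop around the app_repeat example binary. The shape of the harness, reconstructed from the event.sh trace in this log (paths, the pid 485599, and the round count are taken from this run; this is a sketch, not the exact event.sh source), is roughly:

    modprobe nbd
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                                   # 485599 in this run
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create two malloc bdevs (Malloc0, Malloc1) and verify them over /dev/nbd0 and /dev/nbd1
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock "Malloc0 Malloc1" "/dev/nbd0 /dev/nbd1"
        # ask the same app instance to restart for the next round
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    killprocess "$repeat_pid"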
00:06:50.420 [2024-07-15 15:11:59.879953] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485599 ] 00:06:50.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.420 [2024-07-15 15:11:59.946647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.420 [2024-07-15 15:12:00.020419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.420 [2024-07-15 15:12:00.020426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.360 15:12:00 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.360 15:12:00 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:51.360 15:12:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.360 Malloc0 00:06:51.360 15:12:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.620 Malloc1 00:06:51.620 15:12:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.620 /dev/nbd0 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.620 15:12:01 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.620 1+0 records in 00:06:51.620 1+0 records out 00:06:51.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243851 s, 16.8 MB/s 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.620 15:12:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.620 15:12:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.879 /dev/nbd1 00:06:51.879 15:12:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.879 15:12:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.879 1+0 records in 00:06:51.879 1+0 records out 00:06:51.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275607 s, 14.9 MB/s 00:06:51.879 15:12:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.880 15:12:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:51.880 15:12:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.880 15:12:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:51.880 15:12:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:51.880 15:12:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.880 15:12:01 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.880 15:12:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.880 15:12:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.880 15:12:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.140 { 00:06:52.140 "nbd_device": "/dev/nbd0", 00:06:52.140 "bdev_name": "Malloc0" 00:06:52.140 }, 00:06:52.140 { 00:06:52.140 "nbd_device": "/dev/nbd1", 00:06:52.140 "bdev_name": "Malloc1" 00:06:52.140 } 00:06:52.140 ]' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.140 { 00:06:52.140 "nbd_device": "/dev/nbd0", 00:06:52.140 "bdev_name": "Malloc0" 00:06:52.140 }, 00:06:52.140 { 00:06:52.140 "nbd_device": "/dev/nbd1", 00:06:52.140 "bdev_name": "Malloc1" 00:06:52.140 } 00:06:52.140 ]' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.140 /dev/nbd1' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.140 /dev/nbd1' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.140 256+0 records in 00:06:52.140 256+0 records out 00:06:52.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011818 s, 88.7 MB/s 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.140 256+0 records in 00:06:52.140 256+0 records out 00:06:52.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164997 s, 63.6 MB/s 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.140 256+0 records in 00:06:52.140 256+0 records out 00:06:52.140 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0172078 s, 60.9 MB/s 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.140 15:12:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.401 15:12:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.401 15:12:02 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.401 15:12:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.660 15:12:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.660 15:12:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.920 15:12:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.920 [2024-07-15 15:12:02.521722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.180 [2024-07-15 15:12:02.583897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.180 [2024-07-15 15:12:02.583908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.180 [2024-07-15 15:12:02.615263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.180 [2024-07-15 15:12:02.615292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:56.477 spdk_app_start Round 1 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 485599 /var/tmp/spdk-nbd.sock 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 485599 ']' 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
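The nbd_rpc_data_verify pass traced in Round 0 above (and repeated in each later round) reduces to a plain dd/cmp data-path check. With testdir standing for the .../spdk/test/event directory used in this log, it is roughly:

    # write phase: 1 MiB (256 x 4 KiB) of random data, pushed to each exported NBD device
    dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$testdir/nbdrandtest of=$dev bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB read back from each device must match the source file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $testdir/nbdrandtest $dev
    done
    rm $testdir/nbdrandtest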
00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.477 15:12:05 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.477 Malloc0 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.477 Malloc1 00:06:56.477 15:12:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.477 15:12:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.477 /dev/nbd0 00:06:56.477 15:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.477 15:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:56.477 1+0 records in 00:06:56.477 1+0 records out 00:06:56.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294901 s, 13.9 MB/s 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:56.477 15:12:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:56.477 15:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.477 15:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.477 15:12:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.782 /dev/nbd1 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.782 1+0 records in 00:06:56.782 1+0 records out 00:06:56.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173082 s, 23.7 MB/s 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:56.782 15:12:06 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.782 15:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:57.043 { 00:06:57.043 "nbd_device": "/dev/nbd0", 00:06:57.043 "bdev_name": "Malloc0" 00:06:57.043 }, 00:06:57.043 { 00:06:57.043 "nbd_device": "/dev/nbd1", 00:06:57.043 "bdev_name": "Malloc1" 00:06:57.043 } 00:06:57.043 ]' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.043 { 00:06:57.043 "nbd_device": "/dev/nbd0", 00:06:57.043 "bdev_name": "Malloc0" 00:06:57.043 }, 00:06:57.043 { 00:06:57.043 "nbd_device": "/dev/nbd1", 00:06:57.043 "bdev_name": "Malloc1" 00:06:57.043 } 00:06:57.043 ]' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.043 /dev/nbd1' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.043 /dev/nbd1' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.043 256+0 records in 00:06:57.043 256+0 records out 00:06:57.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115552 s, 90.7 MB/s 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.043 256+0 records in 00:06:57.043 256+0 records out 00:06:57.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158401 s, 66.2 MB/s 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.043 256+0 records in 00:06:57.043 256+0 records out 00:06:57.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184425 s, 56.9 MB/s 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.043 15:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.301 15:12:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.302 15:12:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.302 15:12:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.561 15:12:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.561 15:12:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.821 15:12:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.821 [2024-07-15 15:12:07.357765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.821 [2024-07-15 15:12:07.419360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.821 [2024-07-15 15:12:07.419366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.079 [2024-07-15 15:12:07.451498] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.079 [2024-07-15 15:12:07.451532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:00.632 15:12:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.632 15:12:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:00.632 spdk_app_start Round 2 00:07:00.632 15:12:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 485599 /var/tmp/spdk-nbd.sock 00:07:00.632 15:12:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 485599 ']' 00:07:00.632 15:12:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.632 15:12:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.632 15:12:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
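The waitfornbd and waitfornbd_exit helpers that keep appearing between these rounds are polling loops over /proc/partitions. Reconstructed from the trace (the retry count of 20, the grep on /proc/partitions, and the 4 KiB direct read are visible above; the sleep interval and the failure path are assumptions), they look roughly like:

    waitfornbd() {
        local nbd_name=$1 i size
        # wait for the device to show up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # then confirm a direct 4 KiB read returns real data
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s $testdir/nbdtest)
            rm -f $testdir/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        # after nbd_stop_disk, wait for the device to disappear again
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }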
00:07:00.633 15:12:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.633 15:12:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.893 15:12:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.893 15:12:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:00.893 15:12:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.153 Malloc0 00:07:01.153 15:12:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.153 Malloc1 00:07:01.153 15:12:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.153 15:12:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.415 /dev/nbd0 00:07:01.415 15:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.415 15:12:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:01.415 1+0 records in 00:07:01.415 1+0 records out 00:07:01.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000149678 s, 27.4 MB/s 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.415 15:12:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:01.415 15:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.415 15:12:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.415 15:12:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.676 /dev/nbd1 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.676 1+0 records in 00:07:01.676 1+0 records out 00:07:01.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272554 s, 15.0 MB/s 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:01.676 15:12:11 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:07:01.676 { 00:07:01.676 "nbd_device": "/dev/nbd0", 00:07:01.676 "bdev_name": "Malloc0" 00:07:01.676 }, 00:07:01.676 { 00:07:01.676 "nbd_device": "/dev/nbd1", 00:07:01.676 "bdev_name": "Malloc1" 00:07:01.676 } 00:07:01.676 ]' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.676 { 00:07:01.676 "nbd_device": "/dev/nbd0", 00:07:01.676 "bdev_name": "Malloc0" 00:07:01.676 }, 00:07:01.676 { 00:07:01.676 "nbd_device": "/dev/nbd1", 00:07:01.676 "bdev_name": "Malloc1" 00:07:01.676 } 00:07:01.676 ]' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.676 /dev/nbd1' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.676 /dev/nbd1' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.676 15:12:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.936 256+0 records in 00:07:01.936 256+0 records out 00:07:01.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124945 s, 83.9 MB/s 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.936 256+0 records in 00:07:01.936 256+0 records out 00:07:01.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160895 s, 65.2 MB/s 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.936 256+0 records in 00:07:01.936 256+0 records out 00:07:01.936 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330096 s, 31.8 MB/s 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.936 15:12:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.195 15:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.196 15:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.455 15:12:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.455 15:12:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.714 15:12:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:02.714 [2024-07-15 15:12:12.227003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.714 [2024-07-15 15:12:12.289811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.714 [2024-07-15 15:12:12.289816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.714 [2024-07-15 15:12:12.321171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.714 [2024-07-15 15:12:12.321205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.010 15:12:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 485599 /var/tmp/spdk-nbd.sock 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 485599 ']' 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:06.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:06.010 15:12:15 event.app_repeat -- event/event.sh@39 -- # killprocess 485599 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 485599 ']' 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 485599 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 485599 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.010 15:12:15 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.011 15:12:15 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 485599' 00:07:06.011 killing process with pid 485599 00:07:06.011 15:12:15 event.app_repeat -- common/autotest_common.sh@967 -- # kill 485599 00:07:06.011 15:12:15 event.app_repeat -- common/autotest_common.sh@972 -- # wait 485599 00:07:06.011 spdk_app_start is called in Round 0. 00:07:06.011 Shutdown signal received, stop current app iteration 00:07:06.011 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:06.011 spdk_app_start is called in Round 1. 00:07:06.011 Shutdown signal received, stop current app iteration 00:07:06.011 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:06.011 spdk_app_start is called in Round 2. 00:07:06.011 Shutdown signal received, stop current app iteration 00:07:06.011 Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 reinitialization... 00:07:06.011 spdk_app_start is called in Round 3. 
00:07:06.011 Shutdown signal received, stop current app iteration 00:07:06.011 15:12:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:06.011 15:12:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:06.011 00:07:06.011 real 0m15.573s 00:07:06.011 user 0m33.555s 00:07:06.011 sys 0m2.173s 00:07:06.011 15:12:15 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.011 15:12:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.011 ************************************ 00:07:06.011 END TEST app_repeat 00:07:06.011 ************************************ 00:07:06.011 15:12:15 event -- common/autotest_common.sh@1142 -- # return 0 00:07:06.011 15:12:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:06.011 15:12:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.011 15:12:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.011 15:12:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.011 15:12:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.011 ************************************ 00:07:06.011 START TEST cpu_locks 00:07:06.011 ************************************ 00:07:06.011 15:12:15 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:06.011 * Looking for test storage... 00:07:06.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.011 15:12:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:06.011 15:12:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:06.011 15:12:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:06.011 15:12:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:06.011 15:12:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.011 15:12:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.011 15:12:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.011 ************************************ 00:07:06.011 START TEST default_locks 00:07:06.011 ************************************ 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=489430 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 489430 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 489430 ']' 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
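The default_locks case launched here reduces to one assertion: once spdk_tgt is up with core mask 0x1, its core-0 lock must be visible to lslocks. A rough stand-alone version of that check, assuming spdk_tgt was built under build/bin:

    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1   # crude stand-in for the suite's waitforlisten helper
    # the target holds a per-core lock file; lslocks must report it for this pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"
    kill "$pid"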
00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.011 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.271 [2024-07-15 15:12:15.672121] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:06.271 [2024-07-15 15:12:15.672169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489430 ] 00:07:06.271 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.271 [2024-07-15 15:12:15.736904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.271 [2024-07-15 15:12:15.800981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.540 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.541 15:12:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:06.541 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 489430 00:07:06.541 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 489430 00:07:06.541 15:12:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:06.800 lslocks: write error 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 489430 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 489430 ']' 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 489430 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.800 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 489430' 00:07:07.061 killing process with pid 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 489430 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 489430 ']' 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (489430) - No such process 00:07:07.061 ERROR: process (pid: 489430) is no longer running 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.061 00:07:07.061 real 0m1.046s 00:07:07.061 user 0m1.077s 00:07:07.061 sys 0m0.458s 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.061 15:12:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.061 ************************************ 00:07:07.061 END TEST default_locks 00:07:07.061 ************************************ 00:07:07.322 15:12:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:07.322 15:12:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:07.322 15:12:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.322 15:12:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.322 15:12:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.322 ************************************ 00:07:07.322 START TEST default_locks_via_rpc 00:07:07.322 ************************************ 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=489796 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 489796 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 489796 ']' 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.322 15:12:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.322 [2024-07-15 15:12:16.795182] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:07.322 [2024-07-15 15:12:16.795236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489796 ] 00:07:07.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.322 [2024-07-15 15:12:16.857899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.322 [2024-07-15 15:12:16.925365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:08.264 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 489796 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 489796 00:07:08.265 15:12:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.526 15:12:17 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 489796 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 489796 ']' 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 489796 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 489796 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 489796' 00:07:08.526 killing process with pid 489796 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 489796 00:07:08.526 15:12:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 489796 00:07:08.786 00:07:08.786 real 0m1.465s 00:07:08.786 user 0m1.551s 00:07:08.786 sys 0m0.482s 00:07:08.786 15:12:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.786 15:12:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 ************************************ 00:07:08.786 END TEST default_locks_via_rpc 00:07:08.786 ************************************ 00:07:08.786 15:12:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:08.786 15:12:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:08.786 15:12:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.786 15:12:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.786 15:12:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 ************************************ 00:07:08.786 START TEST non_locking_app_on_locked_coremask 00:07:08.786 ************************************ 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=490093 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 490093 /var/tmp/spdk.sock 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 490093 ']' 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.786 15:12:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.786 [2024-07-15 15:12:18.333646] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:08.786 [2024-07-15 15:12:18.333698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490093 ] 00:07:08.786 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.786 [2024-07-15 15:12:18.396383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.046 [2024-07-15 15:12:18.460563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=490173 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 490173 /var/tmp/spdk2.sock 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 490173 ']' 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.618 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.618 [2024-07-15 15:12:19.148239] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:09.618 [2024-07-15 15:12:19.148305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490173 ] 00:07:09.618 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.878 [2024-07-15 15:12:19.243038] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
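The point of this non_locking_app_on_locked_coremask case is that a second target may share the already-locked core as long as it opts out of lock acquisition and uses its own RPC socket. Sketched under the same assumptions as above:

    ./build/bin/spdk_tgt -m 0x1 &                                                  # claims the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock
    # only the first instance should show spdk_cpu_lock in lslocks output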
00:07:09.878 [2024-07-15 15:12:19.243064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.878 [2024-07-15 15:12:19.374103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.448 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.448 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.448 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 490093 00:07:10.448 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 490093 00:07:10.448 15:12:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.019 lslocks: write error 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 490093 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 490093 ']' 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 490093 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490093 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490093' 00:07:11.019 killing process with pid 490093 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 490093 00:07:11.019 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 490093 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 490173 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 490173 ']' 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 490173 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490173 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490173' 00:07:11.590 killing 
process with pid 490173 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 490173 00:07:11.590 15:12:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 490173 00:07:11.590 00:07:11.590 real 0m2.903s 00:07:11.590 user 0m3.152s 00:07:11.590 sys 0m0.885s 00:07:11.590 15:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.590 15:12:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.590 ************************************ 00:07:11.590 END TEST non_locking_app_on_locked_coremask 00:07:11.590 ************************************ 00:07:11.851 15:12:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.851 15:12:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.851 15:12:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.851 15:12:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.851 15:12:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.851 ************************************ 00:07:11.851 START TEST locking_app_on_unlocked_coremask 00:07:11.851 ************************************ 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=490591 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 490591 /var/tmp/spdk.sock 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 490591 ']' 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.851 15:12:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.851 [2024-07-15 15:12:21.309836] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:11.851 [2024-07-15 15:12:21.309882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490591 ] 00:07:11.851 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.851 [2024-07-15 15:12:21.374097] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.851 [2024-07-15 15:12:21.374131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.851 [2024-07-15 15:12:21.439426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=490879 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 490879 /var/tmp/spdk2.sock 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 490879 ']' 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.792 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.792 [2024-07-15 15:12:22.121447] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:12.792 [2024-07-15 15:12:22.121500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid490879 ] 00:07:12.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.792 [2024-07-15 15:12:22.217999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.792 [2024-07-15 15:12:22.347479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.413 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.413 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:13.413 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 490879 00:07:13.413 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 490879 00:07:13.413 15:12:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.981 lslocks: write error 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 490591 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 490591 ']' 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 490591 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490591 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490591' 00:07:13.981 killing process with pid 490591 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 490591 00:07:13.981 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 490591 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 490879 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 490879 ']' 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 490879 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 490879 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.550 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 490879' 00:07:14.551 killing process with pid 490879 00:07:14.551 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 490879 00:07:14.551 15:12:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 490879 00:07:14.811 00:07:14.811 real 0m2.936s 00:07:14.811 user 0m3.190s 00:07:14.811 sys 0m0.849s 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.811 ************************************ 00:07:14.811 END TEST locking_app_on_unlocked_coremask 00:07:14.811 ************************************ 00:07:14.811 15:12:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.811 15:12:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.811 15:12:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.811 15:12:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.811 15:12:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.811 ************************************ 00:07:14.811 START TEST locking_app_on_locked_coremask 00:07:14.811 ************************************ 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=491259 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 491259 /var/tmp/spdk.sock 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 491259 ']' 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.811 15:12:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.811 [2024-07-15 15:12:24.319723] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:14.811 [2024-07-15 15:12:24.319777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491259 ] 00:07:14.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.811 [2024-07-15 15:12:24.383466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.071 [2024-07-15 15:12:24.451404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=491572 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 491572 /var/tmp/spdk2.sock 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 491572 /var/tmp/spdk2.sock 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 491572 /var/tmp/spdk2.sock 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 491572 ']' 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.641 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.641 [2024-07-15 15:12:25.113155] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
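The negative case being prepared here is the mirror image: a second target on the same mask, with locks left enabled, must refuse to start. A sketch of the expected outcome (error text quoted from the trace that follows):

    ./build/bin/spdk_tgt -m 0x1 &                         # holds the core-0 lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # same mask, locks enabled -> exits non-zero
    # expected errors:
    #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    #   spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.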
00:07:15.641 [2024-07-15 15:12:25.113206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491572 ] 00:07:15.641 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.641 [2024-07-15 15:12:25.207122] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 491259 has claimed it. 00:07:15.641 [2024-07-15 15:12:25.207163] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (491572) - No such process 00:07:16.211 ERROR: process (pid: 491572) is no longer running 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 491259 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 491259 00:07:16.211 15:12:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.782 lslocks: write error 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 491259 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 491259 ']' 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 491259 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491259 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491259' 00:07:16.782 killing process with pid 491259 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 491259 00:07:16.782 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 491259 00:07:17.042 00:07:17.042 real 0m2.195s 00:07:17.042 user 0m2.413s 00:07:17.042 sys 0m0.585s 00:07:17.042 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.042 15:12:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 ************************************ 00:07:17.042 END TEST locking_app_on_locked_coremask 00:07:17.042 ************************************ 00:07:17.042 15:12:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.042 15:12:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:17.042 15:12:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.042 15:12:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.042 15:12:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 ************************************ 00:07:17.042 START TEST locking_overlapped_coremask 00:07:17.042 ************************************ 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=491816 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 491816 /var/tmp/spdk.sock 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 491816 ']' 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.042 15:12:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.042 [2024-07-15 15:12:26.585902] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:17.042 [2024-07-15 15:12:26.585948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491816 ] 00:07:17.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.042 [2024-07-15 15:12:26.649275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.302 [2024-07-15 15:12:26.716251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.302 [2024-07-15 15:12:26.716373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.302 [2024-07-15 15:12:26.716376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.873 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=491968 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 491968 /var/tmp/spdk2.sock 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 491968 /var/tmp/spdk2.sock 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 491968 /var/tmp/spdk2.sock 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 491968 ']' 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.874 15:12:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.874 [2024-07-15 15:12:27.400630] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:17.874 [2024-07-15 15:12:27.400681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491968 ] 00:07:17.874 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.874 [2024-07-15 15:12:27.478054] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 491816 has claimed it. 00:07:17.874 [2024-07-15 15:12:27.478088] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:18.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (491968) - No such process 00:07:18.445 ERROR: process (pid: 491968) is no longer running 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 491816 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 491816 ']' 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 491816 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.445 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 491816 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 491816' 00:07:18.706 killing process with pid 491816 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 491816 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 491816 00:07:18.706 00:07:18.706 real 0m1.751s 00:07:18.706 user 0m4.953s 00:07:18.706 sys 0m0.361s 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.706 15:12:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:18.706 ************************************ 00:07:18.706 END TEST locking_overlapped_coremask 00:07:18.706 ************************************ 00:07:18.706 15:12:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:18.706 15:12:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:18.706 15:12:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.706 15:12:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.706 15:12:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.967 ************************************ 00:07:18.967 START TEST locking_overlapped_coremask_via_rpc 00:07:18.967 ************************************ 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=492278 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 492278 /var/tmp/spdk.sock 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 492278 ']' 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:18.967 15:12:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.967 [2024-07-15 15:12:28.414731] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:18.967 [2024-07-15 15:12:28.414779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492278 ] 00:07:18.967 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.967 [2024-07-15 15:12:28.478107] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.967 [2024-07-15 15:12:28.478136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.967 [2024-07-15 15:12:28.544494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.967 [2024-07-15 15:12:28.544607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.967 [2024-07-15 15:12:28.544610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=492344 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 492344 /var/tmp/spdk2.sock 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 492344 ']' 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.910 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.910 [2024-07-15 15:12:29.225643] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:19.910 [2024-07-15 15:12:29.225693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492344 ] 00:07:19.910 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.910 [2024-07-15 15:12:29.298827] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.910 [2024-07-15 15:12:29.298853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.910 [2024-07-15 15:12:29.411646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.910 [2024-07-15 15:12:29.411794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.910 [2024-07-15 15:12:29.411797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.483 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.483 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.483 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.483 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.483 15:12:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.483 [2024-07-15 15:12:30.016951] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 492278 has claimed it. 
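The first target above was launched with -m 0x7 and the second with -m 0x1c, so the two core masks intersect only on core 2, which is exactly the core the claim error reports. The overlap can be checked directly:

    # Why core 2 is the contested core: masks copied from the spdk_tgt command lines above.
    first_mask=0x7     # cores 0, 1, 2
    second_mask=0x1c   # cores 2, 3, 4
    printf 'overlapping core mask: 0x%x\n' $(( first_mask & second_mask ))   # prints 0x4, i.e. core 2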
00:07:20.483 request: 00:07:20.483 { 00:07:20.483 "method": "framework_enable_cpumask_locks", 00:07:20.483 "req_id": 1 00:07:20.483 } 00:07:20.483 Got JSON-RPC error response 00:07:20.483 response: 00:07:20.483 { 00:07:20.483 "code": -32603, 00:07:20.483 "message": "Failed to claim CPU core: 2" 00:07:20.483 } 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 492278 /var/tmp/spdk.sock 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 492278 ']' 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.483 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.743 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.743 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.743 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 492344 /var/tmp/spdk2.sock 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 492344 ']' 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
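The framework_enable_cpumask_locks request/response pair above is ordinary SPDK JSON-RPC traffic; rpc_cmd in the trace is the test wrapper around SPDK's scripts/rpc.py. Reissuing the same call by hand could look roughly like this (script path assumed from the workspace layout shown in the trace):

    # Ask the second target (listening on /var/tmp/spdk2.sock) to claim its cores.
    # Against this target it returns the -32603 "Failed to claim CPU core: 2" error shown above,
    # because the first target already holds core 2.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks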
00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.744 00:07:20.744 real 0m1.992s 00:07:20.744 user 0m0.788s 00:07:20.744 sys 0m0.131s 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.744 15:12:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.744 ************************************ 00:07:20.744 END TEST locking_overlapped_coremask_via_rpc 00:07:20.744 ************************************ 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:21.004 15:12:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.004 15:12:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 492278 ]] 00:07:21.004 15:12:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 492278 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 492278 ']' 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 492278 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492278 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492278' 00:07:21.004 killing process with pid 492278 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 492278 00:07:21.004 15:12:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 492278 00:07:21.263 15:12:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 492344 ]] 00:07:21.263 15:12:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 492344 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 492344 ']' 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 492344 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 492344 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 492344' 00:07:21.263 killing process with pid 492344 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 492344 00:07:21.263 15:12:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 492344 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 492278 ]] 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 492278 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 492278 ']' 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 492278 00:07:21.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (492278) - No such process 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 492278 is not found' 00:07:21.523 Process with pid 492278 is not found 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 492344 ]] 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 492344 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 492344 ']' 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 492344 00:07:21.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (492344) - No such process 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 492344 is not found' 00:07:21.523 Process with pid 492344 is not found 00:07:21.523 15:12:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:21.523 00:07:21.523 real 0m15.429s 00:07:21.523 user 0m26.625s 00:07:21.523 sys 0m4.619s 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.523 15:12:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.523 ************************************ 00:07:21.523 END TEST cpu_locks 00:07:21.523 ************************************ 00:07:21.523 15:12:30 event -- common/autotest_common.sh@1142 -- # return 0 00:07:21.523 00:07:21.523 real 0m40.322s 00:07:21.523 user 1m17.095s 00:07:21.523 sys 0m7.751s 00:07:21.523 15:12:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.523 15:12:30 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.523 ************************************ 00:07:21.523 END TEST event 00:07:21.523 ************************************ 00:07:21.523 15:12:30 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.523 15:12:30 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.523 15:12:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.523 15:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.523 15:12:30 -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.523 ************************************ 00:07:21.523 START TEST thread 00:07:21.523 ************************************ 00:07:21.523 15:12:31 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:21.523 * Looking for test storage... 00:07:21.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:21.523 15:12:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.523 15:12:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:21.523 15:12:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.523 15:12:31 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.523 ************************************ 00:07:21.523 START TEST thread_poller_perf 00:07:21.523 ************************************ 00:07:21.523 15:12:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:21.523 [2024-07-15 15:12:31.138337] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:21.523 [2024-07-15 15:12:31.138386] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492783 ] 00:07:21.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.783 [2024-07-15 15:12:31.200384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.783 [2024-07-15 15:12:31.265304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.783 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:22.722 ====================================== 00:07:22.722 busy:2414362562 (cyc) 00:07:22.722 total_run_count: 286000 00:07:22.722 tsc_hz: 2400000000 (cyc) 00:07:22.722 ====================================== 00:07:22.722 poller_cost: 8441 (cyc), 3517 (nsec) 00:07:22.722 00:07:22.722 real 0m1.198s 00:07:22.722 user 0m1.134s 00:07:22.722 sys 0m0.059s 00:07:22.722 15:12:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.722 15:12:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.722 ************************************ 00:07:22.722 END TEST thread_poller_perf 00:07:22.722 ************************************ 00:07:22.988 15:12:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:22.988 15:12:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.988 15:12:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:22.988 15:12:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.988 15:12:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 ************************************ 00:07:22.988 START TEST thread_poller_perf 00:07:22.988 ************************************ 00:07:22.988 15:12:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.988 [2024-07-15 15:12:32.406881] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:22.988 [2024-07-15 15:12:32.406934] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493133 ] 00:07:22.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.988 [2024-07-15 15:12:32.468387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.988 [2024-07-15 15:12:32.532406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.988 Running 1000 pollers for 1 seconds with 0 microseconds period. 
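The poller_cost figure in the summary above is simply the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC rate; the 0-microsecond-period run that follows prints the same fields. Rechecking with the numbers copied from the summary:

    # Values copied from the ====== summary block above.
    busy=2414362562 runs=286000 tsc_hz=2400000000
    echo "poller_cost: $(( busy / runs )) (cyc), $(( busy / runs * 1000000000 / tsc_hz )) (nsec)"
    # prints: poller_cost: 8441 (cyc), 3517 (nsec)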
00:07:24.369 ====================================== 00:07:24.369 busy:2401789260 (cyc) 00:07:24.369 total_run_count: 3799000 00:07:24.369 tsc_hz: 2400000000 (cyc) 00:07:24.369 ====================================== 00:07:24.369 poller_cost: 632 (cyc), 263 (nsec) 00:07:24.369 00:07:24.369 real 0m1.187s 00:07:24.369 user 0m1.121s 00:07:24.369 sys 0m0.062s 00:07:24.369 15:12:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.370 15:12:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.370 ************************************ 00:07:24.370 END TEST thread_poller_perf 00:07:24.370 ************************************ 00:07:24.370 15:12:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:24.370 15:12:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:24.370 00:07:24.370 real 0m2.596s 00:07:24.370 user 0m2.333s 00:07:24.370 sys 0m0.261s 00:07:24.370 15:12:33 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.370 15:12:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.370 ************************************ 00:07:24.370 END TEST thread 00:07:24.370 ************************************ 00:07:24.370 15:12:33 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.370 15:12:33 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:24.370 15:12:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.370 15:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.370 15:12:33 -- common/autotest_common.sh@10 -- # set +x 00:07:24.370 ************************************ 00:07:24.370 START TEST accel 00:07:24.370 ************************************ 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:24.370 * Looking for test storage... 00:07:24.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:24.370 15:12:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:24.370 15:12:33 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:24.370 15:12:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:24.370 15:12:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=493521 00:07:24.370 15:12:33 accel -- accel/accel.sh@63 -- # waitforlisten 493521 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@829 -- # '[' -z 493521 ']' 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:24.370 15:12:33 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.370 15:12:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.370 15:12:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:24.370 15:12:33 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:24.370 15:12:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.370 15:12:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.370 15:12:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.370 15:12:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.370 15:12:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.370 15:12:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:24.370 15:12:33 accel -- accel/accel.sh@41 -- # jq -r . 00:07:24.370 [2024-07-15 15:12:33.822761] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:24.370 [2024-07-15 15:12:33.822817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493521 ] 00:07:24.370 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.370 [2024-07-15 15:12:33.887942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.370 [2024-07-15 15:12:33.958472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.312 15:12:34 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.312 15:12:34 accel -- common/autotest_common.sh@862 -- # return 0 00:07:25.312 15:12:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:25.312 15:12:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:25.312 15:12:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:25.312 15:12:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:25.312 15:12:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:25.312 15:12:34 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:25.312 15:12:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:25.312 15:12:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.312 15:12:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.312 15:12:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.312 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.312 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.312 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 
00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # IFS== 00:07:25.313 15:12:34 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:25.313 15:12:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:25.313 15:12:34 accel -- accel/accel.sh@75 -- # killprocess 493521 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@948 -- # '[' -z 493521 ']' 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@952 -- # kill -0 493521 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@953 -- # uname 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 493521 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 493521' 00:07:25.313 killing process with pid 493521 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@967 -- # kill 493521 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@972 -- # wait 493521 00:07:25.313 15:12:34 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:25.313 15:12:34 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.313 15:12:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.313 15:12:34 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 
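The long IFS== / read loop traced above is filling expected_opcs from the accel_get_opc_assignments RPC, piped through the jq filter shown earlier; in this run every opcode maps to the software module. The same query can be run on its own (rpc.py path assumed from the workspace layout used elsewhere in this job):

    # List opcode-to-module assignments as key=value pairs, exactly as the test consumes them.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'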
00:07:25.313 15:12:34 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:25.313 15:12:34 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.573 15:12:34 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:25.573 15:12:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.573 15:12:34 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:25.573 15:12:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:25.573 15:12:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.573 15:12:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.573 ************************************ 00:07:25.573 START TEST accel_missing_filename 00:07:25.573 ************************************ 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.573 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:25.573 15:12:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:25.573 [2024-07-15 15:12:35.037767] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:25.574 [2024-07-15 15:12:35.037860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493807 ] 00:07:25.574 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.574 [2024-07-15 15:12:35.104351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.574 [2024-07-15 15:12:35.168705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.834 [2024-07-15 15:12:35.200654] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.834 [2024-07-15 15:12:35.237582] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:25.834 A filename is required. 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:25.834 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.834 00:07:25.834 real 0m0.285s 00:07:25.834 user 0m0.201s 00:07:25.834 sys 0m0.107s 00:07:25.835 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.835 15:12:35 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:25.835 ************************************ 00:07:25.835 END TEST accel_missing_filename 00:07:25.835 ************************************ 00:07:25.835 15:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.835 15:12:35 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.835 15:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:25.835 15:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.835 15:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.835 ************************************ 00:07:25.835 START TEST accel_compress_verify 00:07:25.835 ************************************ 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.835 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:25.835 15:12:35 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:25.835 [2024-07-15 15:12:35.381564] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:25.835 [2024-07-15 15:12:35.381610] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493917 ] 00:07:25.835 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.835 [2024-07-15 15:12:35.442655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.095 [2024-07-15 15:12:35.506194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.095 [2024-07-15 15:12:35.537943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.095 [2024-07-15 15:12:35.574449] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:07:26.095 00:07:26.095 Compression does not support the verify option, aborting. 
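The two compress failures above are complementary: accel_missing_filename runs -w compress with no input file and gets "A filename is required.", while accel_compress_verify supplies -l but adds -y, which compression does not support. A command satisfying both constraints (not something this job actually runs) would drop -y and keep the input file from the trace:

    # Compress the test input without requesting result verification.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib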
00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.095 00:07:26.095 real 0m0.263s 00:07:26.095 user 0m0.201s 00:07:26.095 sys 0m0.102s 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.095 15:12:35 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:26.095 ************************************ 00:07:26.095 END TEST accel_compress_verify 00:07:26.095 ************************************ 00:07:26.095 15:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.095 15:12:35 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:26.095 15:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.095 15:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.095 15:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.095 ************************************ 00:07:26.095 START TEST accel_wrong_workload 00:07:26.095 ************************************ 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.095 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:26.095 15:12:35 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:07:26.356 Unsupported workload type: foobar 00:07:26.356 [2024-07-15 15:12:35.725560] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:26.356 accel_perf options: 00:07:26.356 [-h help message] 00:07:26.356 [-q queue depth per core] 00:07:26.356 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:26.356 [-T number of threads per core 00:07:26.356 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:26.356 [-t time in seconds] 00:07:26.356 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:26.356 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:26.356 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:26.356 [-l for compress/decompress workloads, name of uncompressed input file 00:07:26.356 [-S for crc32c workload, use this seed value (default 0) 00:07:26.356 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:26.356 [-f for fill workload, use this BYTE value (default 255) 00:07:26.356 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:26.356 [-y verify result if this switch is on] 00:07:26.356 [-a tasks to allocate per core (default: same value as -q)] 00:07:26.356 Can be used to spread operations across a wider range of memory. 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.356 00:07:26.356 real 0m0.035s 00:07:26.356 user 0m0.043s 00:07:26.356 sys 0m0.015s 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.356 15:12:35 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 END TEST accel_wrong_workload 00:07:26.356 ************************************ 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.356 15:12:35 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 START TEST accel_negative_buffers 00:07:26.356 ************************************ 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.356 15:12:35 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:26.356 15:12:35 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:26.356 -x option must be non-negative. 00:07:26.356 [2024-07-15 15:12:35.819911] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:26.356 accel_perf options: 00:07:26.356 [-h help message] 00:07:26.356 [-q queue depth per core] 00:07:26.356 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:26.356 [-T number of threads per core 00:07:26.356 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:26.356 [-t time in seconds] 00:07:26.356 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:26.356 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:26.356 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:26.356 [-l for compress/decompress workloads, name of uncompressed input file 00:07:26.356 [-S for crc32c workload, use this seed value (default 0) 00:07:26.356 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:26.356 [-f for fill workload, use this BYTE value (default 255) 00:07:26.356 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:26.356 [-y verify result if this switch is on] 00:07:26.356 [-a tasks to allocate per core (default: same value as -q)] 00:07:26.356 Can be used to spread operations across a wider range of memory. 
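Per the usage text above, -x sets the number of xor source buffers and must be at least 2, which is why the deliberately negative -x -1 is rejected. A valid form of the same command (again, not run by this test) would be:

    # xor across two source buffers and verify the result, per the -x/-y help text above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w xor -y -x 2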
00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.356 00:07:26.356 real 0m0.019s 00:07:26.356 user 0m0.012s 00:07:26.356 sys 0m0.007s 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.356 15:12:35 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 END TEST accel_negative_buffers 00:07:26.356 ************************************ 00:07:26.356 Error: writing output failed: Broken pipe 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.356 15:12:35 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.356 15:12:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.356 ************************************ 00:07:26.356 START TEST accel_crc32c 00:07:26.356 ************************************ 00:07:26.356 15:12:35 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:26.356 15:12:35 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:26.356 [2024-07-15 15:12:35.928833] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:26.356 [2024-07-15 15:12:35.928925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493979 ] 00:07:26.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.617 [2024-07-15 15:12:36.006132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.617 [2024-07-15 15:12:36.069890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 
00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.617 15:12:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:27.997 15:12:37 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.997 00:07:27.997 real 0m1.298s 00:07:27.997 user 0m1.200s 00:07:27.997 sys 0m0.110s 00:07:27.997 15:12:37 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.997 15:12:37 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.997 ************************************ 00:07:27.997 END TEST accel_crc32c 00:07:27.997 ************************************ 00:07:27.997 15:12:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.997 15:12:37 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:27.997 15:12:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.997 15:12:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.997 15:12:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.997 ************************************ 00:07:27.997 START TEST accel_crc32c_C2 00:07:27.997 ************************************ 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
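Annotation: the accel_crc32c_C2 case configured above drives the accel_perf example with "-t 1 -w crc32c -y -C 2"; the "-c /dev/fd/62" argument appears to carry a JSON accel configuration from the harness, which is empty in this run per the "[[ -n '' ]]" check. A rough standalone rerun might look like the sketch below; the binary path is taken from the log, while dropping -c when no accel JSON is needed is an assumption.

    # Sketch only; path from the log, omission of -c is an assumption.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -y -C 2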
00:07:27.997 [2024-07-15 15:12:37.303713] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:27.997 [2024-07-15 15:12:37.303779] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494334 ] 00:07:27.997 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.997 [2024-07-15 15:12:37.369376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.997 [2024-07-15 15:12:37.435481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 
15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.997 15:12:37 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 
-- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.976 00:07:28.976 real 0m1.289s 00:07:28.976 user 0m1.188s 00:07:28.976 sys 0m0.112s 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.976 15:12:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:28.976 ************************************ 00:07:28.976 END TEST accel_crc32c_C2 00:07:28.976 ************************************ 00:07:29.236 15:12:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.236 15:12:38 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:29.236 15:12:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.236 15:12:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.236 15:12:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.236 ************************************ 00:07:29.236 START TEST accel_copy 00:07:29.236 ************************************ 00:07:29.236 15:12:38 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 
]] 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:29.236 [2024-07-15 15:12:38.665666] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:29.236 [2024-07-15 15:12:38.665759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494697 ] 00:07:29.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.236 [2024-07-15 15:12:38.730870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.236 [2024-07-15 15:12:38.795672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:29.236 15:12:38 accel.accel_copy 
-- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.236 15:12:38 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.621 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.621 
15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:30.622 15:12:39 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.622 00:07:30.622 real 0m1.288s 00:07:30.622 user 0m1.188s 00:07:30.622 sys 0m0.111s 00:07:30.622 15:12:39 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.622 15:12:39 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 ************************************ 00:07:30.622 END TEST accel_copy 00:07:30.622 ************************************ 00:07:30.622 15:12:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.622 15:12:39 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.622 15:12:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:30.622 15:12:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.622 15:12:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 ************************************ 00:07:30.622 START TEST accel_fill 00:07:30.622 ************************************ 00:07:30.622 15:12:39 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:30.622 15:12:39 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:30.622 [2024-07-15 15:12:40.008381] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:30.622 [2024-07-15 15:12:40.008431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494965 ] 00:07:30.622 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.622 [2024-07-15 15:12:40.070659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.622 [2024-07-15 15:12:40.135449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
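Annotation: unlike the plain 4096-byte copy and crc cases, the accel_fill case traced here is launched with "-t 1 -w fill -f 128 -q 64 -a 64 -y"; the fill byte 128 shows up as val=0x80 in the configuration dump, alongside the two 64 values from -q and -a. A rough rerun under the same path assumption as the earlier sketch:

    # Sketch only; same binary path assumption as above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y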
00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.622 15:12:40 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:32.005 15:12:41 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.005 00:07:32.005 real 0m1.270s 00:07:32.005 user 0m1.186s 00:07:32.006 sys 0m0.095s 00:07:32.006 15:12:41 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.006 15:12:41 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:32.006 ************************************ 00:07:32.006 END TEST accel_fill 00:07:32.006 ************************************ 00:07:32.006 15:12:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:32.006 15:12:41 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:32.006 15:12:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:32.006 15:12:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.006 15:12:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.006 ************************************ 00:07:32.006 START TEST accel_copy_crc32c 00:07:32.006 ************************************ 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:32.006 [2024-07-15 15:12:41.356298] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:32.006 [2024-07-15 15:12:41.356392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495146 ] 00:07:32.006 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.006 [2024-07-15 15:12:41.421208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.006 [2024-07-15 15:12:41.486053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.006 15:12:41 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read 
-r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.389 00:07:33.389 real 0m1.287s 00:07:33.389 user 0m1.193s 00:07:33.389 sys 0m0.106s 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.389 15:12:42 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:33.389 ************************************ 00:07:33.389 END TEST accel_copy_crc32c 00:07:33.389 ************************************ 00:07:33.389 15:12:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.389 15:12:42 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.389 15:12:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:33.389 15:12:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.389 15:12:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.389 ************************************ 00:07:33.389 START TEST accel_copy_crc32c_C2 00:07:33.389 ************************************ 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 
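Annotation: every case in this stretch ends the same way; with the parsed values substituted back in, the accel.sh@27 checks that produce the "[[ -n software ]] / [[ -n copy_crc32c ]] / [[ software == software ]]" lines above reduce to the assertions below, followed by the real/user/sys timing summary for the roughly one-second run.

    # Hedged reconstruction of the traced pass checks; the variable names mirror the
    # accel_module / accel_opc assignments visible earlier in the trace.
    [[ -n $accel_module ]]            # a module was reported
    [[ -n $accel_opc ]]               # an opcode was reported
    [[ $accel_module == software ]]   # the software engine handled the operation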
00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:33.389 [2024-07-15 15:12:42.712527] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:33.389 [2024-07-15 15:12:42.712590] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495436 ] 00:07:33.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.389 [2024-07-15 15:12:42.777096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.389 [2024-07-15 15:12:42.841668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:33.389 15:12:42 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:33.389 15:12:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.774 00:07:34.774 real 0m1.288s 00:07:34.774 user 0m1.205s 00:07:34.774 sys 0m0.095s 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.774 15:12:43 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:07:34.774 ************************************ 00:07:34.774 END TEST accel_copy_crc32c_C2 00:07:34.774 ************************************ 00:07:34.774 15:12:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.774 15:12:44 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:34.774 15:12:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.774 15:12:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.774 15:12:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.774 ************************************ 00:07:34.774 START TEST accel_dualcast 00:07:34.774 ************************************ 00:07:34.774 15:12:44 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:34.774 [2024-07-15 15:12:44.073667] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
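Annotation: the accel_dualcast case starting here is the last workload in this stretch, invoked as "accel_test -t 1 -w dualcast -y" like the others. A compact sketch that re-drives all of the software-module workloads exercised in this section for one second each, under the same path assumption as the earlier examples and with default buffer sizes assumed:

    # Sketch only; plain one-second verify runs, default sizes assumed.
    for wl in crc32c copy fill copy_crc32c dualcast; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w "$wl" -y
    done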
00:07:34.774 [2024-07-15 15:12:44.073735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid495785 ] 00:07:34.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.774 [2024-07-15 15:12:44.139294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.774 [2024-07-15 15:12:44.206334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.774 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:34.775 15:12:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:35.718 15:12:45 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.718 00:07:35.718 real 0m1.289s 00:07:35.718 user 0m1.196s 00:07:35.718 sys 0m0.103s 00:07:35.718 15:12:45 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.718 15:12:45 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:35.718 ************************************ 00:07:35.718 END TEST accel_dualcast 00:07:35.718 ************************************ 00:07:35.978 15:12:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.978 15:12:45 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:35.978 15:12:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:35.978 15:12:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.978 15:12:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.978 ************************************ 00:07:35.978 START TEST accel_compare 00:07:35.978 ************************************ 00:07:35.978 15:12:45 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:35.978 15:12:45 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:35.978 [2024-07-15 15:12:45.435257] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:35.978 [2024-07-15 15:12:45.435321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496138 ] 00:07:35.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.978 [2024-07-15 15:12:45.501307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.978 [2024-07-15 15:12:45.567021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:36.238 15:12:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.179 15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.180 
15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:37.180 15:12:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.180 00:07:37.180 real 0m1.289s 00:07:37.180 user 0m1.198s 00:07:37.180 sys 0m0.101s 00:07:37.180 15:12:46 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.180 15:12:46 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:37.180 ************************************ 00:07:37.180 END TEST accel_compare 00:07:37.180 ************************************ 00:07:37.180 15:12:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.180 15:12:46 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:37.180 15:12:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:37.180 15:12:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.180 15:12:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.180 ************************************ 00:07:37.180 START TEST accel_xor 00:07:37.180 ************************************ 00:07:37.180 15:12:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:37.180 15:12:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:37.180 [2024-07-15 15:12:46.797321] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:37.180 [2024-07-15 15:12:46.797383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496474 ] 00:07:37.440 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.440 [2024-07-15 15:12:46.861892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.440 [2024-07-15 15:12:46.925189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.440 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.441 15:12:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.826 00:07:38.826 real 0m1.286s 00:07:38.826 user 0m1.196s 00:07:38.826 sys 0m0.102s 00:07:38.826 15:12:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.826 15:12:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:38.826 ************************************ 00:07:38.826 END TEST accel_xor 00:07:38.826 ************************************ 00:07:38.826 15:12:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.826 15:12:48 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:38.826 15:12:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:38.826 15:12:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.826 15:12:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.826 ************************************ 00:07:38.826 START TEST accel_xor 00:07:38.826 ************************************ 00:07:38.826 15:12:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:38.826 [2024-07-15 15:12:48.138960] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:38.826 [2024-07-15 15:12:48.139006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496661 ] 00:07:38.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.826 [2024-07-15 15:12:48.200395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.826 [2024-07-15 15:12:48.264550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.826 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.827 15:12:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:39.768 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:40.028 15:12:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.028 00:07:40.028 real 0m1.268s 00:07:40.028 user 0m1.182s 00:07:40.028 sys 0m0.097s 00:07:40.028 15:12:49 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.028 15:12:49 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 ************************************ 00:07:40.028 END TEST accel_xor 00:07:40.028 ************************************ 00:07:40.028 15:12:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.028 15:12:49 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:40.028 15:12:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:40.028 15:12:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.028 15:12:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.028 ************************************ 00:07:40.028 START TEST accel_dif_verify 00:07:40.028 ************************************ 00:07:40.028 15:12:49 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:40.028 [2024-07-15 15:12:49.475904] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:40.028 [2024-07-15 15:12:49.475952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid496879 ] 00:07:40.028 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.028 [2024-07-15 15:12:49.538174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.028 [2024-07-15 15:12:49.601757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:40.028 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:40.029 15:12:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:41.413 15:12:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.413 00:07:41.413 real 0m1.269s 00:07:41.413 user 0m1.184s 00:07:41.413 sys 0m0.097s 00:07:41.413 15:12:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.413 15:12:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:41.413 ************************************ 00:07:41.413 END TEST accel_dif_verify 00:07:41.413 ************************************ 00:07:41.413 15:12:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.413 15:12:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:41.413 15:12:50 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:41.413 15:12:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.413 15:12:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.413 ************************************ 00:07:41.413 START TEST accel_dif_generate 00:07:41.413 ************************************ 00:07:41.413 15:12:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 
15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:41.413 [2024-07-15 15:12:50.833730] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:41.413 [2024-07-15 15:12:50.833818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497226 ] 00:07:41.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.413 [2024-07-15 15:12:50.900369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.413 [2024-07-15 15:12:50.965208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:50 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:41.413 15:12:51 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.413 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:41.414 15:12:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.800 15:12:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:42.800 15:12:52 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.800 00:07:42.800 real 0m1.290s 00:07:42.800 user 0m1.198s 00:07:42.800 sys 0m0.105s 00:07:42.800 15:12:52 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.800 15:12:52 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:42.800 ************************************ 00:07:42.800 END TEST accel_dif_generate 00:07:42.800 ************************************ 00:07:42.800 15:12:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.800 15:12:52 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:42.800 15:12:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:42.800 15:12:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.800 15:12:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.800 ************************************ 00:07:42.801 START TEST accel_dif_generate_copy 00:07:42.801 ************************************ 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:42.801 [2024-07-15 15:12:52.196081] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
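For reference, the accel_dif_generate case that finishes above boils down to the single accel_perf invocation shown in its trace. A minimal sketch of running it by hand follows; the workspace path is the one in the log, while feeding the JSON config over /dev/fd/62 via process substitution, and the exact JSON that build_accel_config emits, are assumptions this log does not confirm.

# Sketch only: hand-run equivalent of the dif_generate case traced above.
# The empty "accel" subsystem config is a stand-in for whatever
# build_accel_config actually generates in accel.sh.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w dif_generate \
    62< <(printf '{"subsystems":[{"subsystem":"accel","config":[]}]}')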
00:07:42.801 [2024-07-15 15:12:52.196152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497576 ] 00:07:42.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.801 [2024-07-15 15:12:52.261077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.801 [2024-07-15 15:12:52.324645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
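The long runs of 'IFS=:', 'read -r var val' and 'case "$var"' entries in the trace come from accel.sh splitting accel_perf's "key: value" summary lines; the accel_opc= and accel_module= assignments the trace shows at accel/accel.sh@23 and @22 are what the [[ -n ... ]] checks at accel.sh@27 later assert on. A rough sketch of that pattern, with the case patterns and output wording assumed rather than copied from accel.sh:

# Pattern sketch only (not the real accel.sh): scan accel_perf's summary for
# the reported module and workload, then assert they were actually seen.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
while IFS=: read -r var val; do
    case "$var" in
        *[Mm]odule*)   accel_module=${val# } ;;   # expected to be "software"
        *[Ww]orkload*) accel_opc=${val# } ;;      # expected to match -w
    esac
done < <("$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy)
[[ -n $accel_module ]] && [[ -n $accel_opc ]]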
00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:42.801 15:12:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.187 00:07:44.187 real 0m1.285s 00:07:44.187 user 0m1.195s 00:07:44.187 sys 0m0.102s 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.187 15:12:53 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.187 ************************************ 00:07:44.187 END TEST accel_dif_generate_copy 00:07:44.187 ************************************ 00:07:44.187 15:12:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:44.187 15:12:53 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:44.187 15:12:53 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.187 15:12:53 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:44.187 15:12:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.187 15:12:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.187 ************************************ 00:07:44.187 START TEST accel_comp 00:07:44.187 ************************************ 00:07:44.187 15:12:53 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.187 15:12:53 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:44.187 [2024-07-15 15:12:53.558935] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:44.187 [2024-07-15 15:12:53.559024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497916 ] 00:07:44.187 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.187 [2024-07-15 15:12:53.627105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.187 [2024-07-15 15:12:53.693714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.187 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:44.188 15:12:53 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:45.572 15:12:54 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.572 00:07:45.572 real 0m1.297s 00:07:45.572 user 0m1.203s 00:07:45.572 sys 0m0.107s 00:07:45.572 15:12:54 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.572 15:12:54 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:45.572 ************************************ 00:07:45.572 END TEST accel_comp 00:07:45.572 ************************************ 00:07:45.572 15:12:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.572 15:12:54 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.572 15:12:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:45.572 15:12:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.572 15:12:54 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.572 ************************************ 00:07:45.572 START TEST accel_decomp 00:07:45.572 ************************************ 00:07:45.572 15:12:54 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:45.572 15:12:54 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:45.572 [2024-07-15 15:12:54.925058] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
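The compress and decompress cases (accel_comp above, accel_decomp starting here) add two flags to the same harness: -l pointing at spdk/test/accel/bib and, for decompress, -y. Read from the trace alone, -l looks like the input data file and -y like a verify switch; treating them that way, and dropping the -c JSON config, are assumptions in the sketch below.

# Sketch of the decompress case by hand (flag meanings inferred from the
# trace, not checked against accel_perf's help output).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y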
00:07:45.572 [2024-07-15 15:12:54.925122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498110 ] 00:07:45.572 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.572 [2024-07-15 15:12:54.991772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.572 [2024-07-15 15:12:55.060123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.572 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:45.573 15:12:55 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:45.573 15:12:55 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:45.573 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:45.573 15:12:55 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.600 15:12:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.600 00:07:46.600 real 0m1.296s 00:07:46.600 user 0m1.206s 00:07:46.600 sys 0m0.102s 00:07:46.600 15:12:56 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.600 15:12:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:46.600 ************************************ 00:07:46.600 END TEST accel_decomp 00:07:46.600 ************************************ 00:07:46.860 15:12:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.860 15:12:56 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.860 15:12:56 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:46.860 15:12:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.860 15:12:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.860 ************************************ 00:07:46.860 START TEST accel_decomp_full 00:07:46.860 ************************************ 00:07:46.860 15:12:56 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:46.861 15:12:56 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:46.861 [2024-07-15 15:12:56.294699] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:46.861 [2024-07-15 15:12:56.294795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498320 ] 00:07:46.861 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.861 [2024-07-15 15:12:56.361925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.861 [2024-07-15 15:12:56.429935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:46.861 15:12:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.243 15:12:57 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.243 00:07:48.243 real 0m1.307s 00:07:48.243 user 0m1.207s 00:07:48.243 sys 0m0.111s 00:07:48.243 15:12:57 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.243 15:12:57 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:48.243 ************************************ 00:07:48.243 END TEST accel_decomp_full 00:07:48.243 ************************************ 00:07:48.243 15:12:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.243 15:12:57 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.243 15:12:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
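accel_decomp_full, which ends in this stretch of the log, differs from the plain decompress run only by -o 0; the '111250 bytes' value that replaces the usual '4096 bytes' in its trace suggests -o 0 switches the transfer size to the full size of the bib input file, though that reading is an inference from the trace rather than a documented fact.

# Sketch of the "full" variant: the decompress run plus -o 0 (interpreted
# here, per the 111250-byte value in the trace, as "use the whole input file").
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -o 0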
00:07:48.243 15:12:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.243 15:12:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.243 ************************************ 00:07:48.243 START TEST accel_decomp_mcore 00:07:48.243 ************************************ 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:48.243 [2024-07-15 15:12:57.656034] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
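accel_decomp_mcore, whose startup begins here, is the first case run with -m 0xf; that mask shows up below as '-c 0xf' in the DPDK EAL parameters, as 'Total cores available: 4', and as four 'Reactor started on core' notices, and the timing summary later reports roughly four cores' worth of CPU (user 0m4.419s) inside about 1.28 s of wall time. The sketch simply adds the mask to the previous invocation:

# Sketch of the multi-core variant: the same decompress run with a four-core
# mask handed through to the SPDK app framework.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" -y -m 0xf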
00:07:48.243 [2024-07-15 15:12:57.656078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498672 ] 00:07:48.243 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.243 [2024-07-15 15:12:57.717101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.243 [2024-07-15 15:12:57.782996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.243 [2024-07-15 15:12:57.783114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.243 [2024-07-15 15:12:57.783273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.243 [2024-07-15 15:12:57.783273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.243 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:48.244 15:12:57 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.626 00:07:49.626 real 0m1.279s 00:07:49.626 user 0m4.419s 00:07:49.626 sys 0m0.103s 00:07:49.626 15:12:58 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.626 15:12:58 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:49.626 ************************************ 00:07:49.626 END TEST accel_decomp_mcore 00:07:49.626 ************************************ 00:07:49.626 15:12:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.626 15:12:58 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.626 15:12:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.626 15:12:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.626 15:12:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.626 ************************************ 00:07:49.626 START TEST accel_decomp_full_mcore 00:07:49.626 ************************************ 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:49.626 15:12:58 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:49.626 [2024-07-15 15:12:59.022902] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:49.626 [2024-07-15 15:12:59.022964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499024 ] 00:07:49.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.626 [2024-07-15 15:12:59.087635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.626 [2024-07-15 15:12:59.154880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.626 [2024-07-15 15:12:59.154996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.626 [2024-07-15 15:12:59.155152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.626 [2024-07-15 15:12:59.155153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.626 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:49.627 15:12:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.009 15:13:00 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.009 00:07:51.010 real 0m1.308s 00:07:51.010 user 0m4.456s 00:07:51.010 sys 0m0.126s 00:07:51.010 15:13:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.010 15:13:00 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 ************************************ 00:07:51.010 END TEST accel_decomp_full_mcore 00:07:51.010 ************************************ 00:07:51.010 15:13:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.010 15:13:00 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.010 15:13:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:51.010 15:13:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.010 15:13:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.010 ************************************ 00:07:51.010 START TEST accel_decomp_mthread 00:07:51.010 ************************************ 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:51.010 [2024-07-15 15:13:00.407493] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:51.010 [2024-07-15 15:13:00.407589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499374 ] 00:07:51.010 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.010 [2024-07-15 15:13:00.474540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.010 [2024-07-15 15:13:00.544750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.010 15:13:00 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.391 15:13:01 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.392 00:07:52.392 real 0m1.302s 00:07:52.392 user 0m1.207s 00:07:52.392 sys 0m0.109s 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.392 15:13:01 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:52.392 ************************************ 00:07:52.392 END TEST accel_decomp_mthread 00:07:52.392 ************************************ 00:07:52.392 15:13:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.392 15:13:01 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.392 15:13:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:52.392 15:13:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.392 15:13:01 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.392 ************************************ 00:07:52.392 START TEST accel_decomp_full_mthread 00:07:52.392 ************************************ 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:52.392 [2024-07-15 15:13:01.769541] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:52.392 [2024-07-15 15:13:01.769605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499608 ] 00:07:52.392 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.392 [2024-07-15 15:13:01.833977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.392 [2024-07-15 15:13:01.899102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.392 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:52.393 15:13:01 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.777 00:07:53.777 real 0m1.320s 00:07:53.777 user 0m1.223s 00:07:53.777 sys 0m0.110s 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.777 15:13:03 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:53.777 ************************************ 00:07:53.777 END TEST accel_decomp_full_mthread 
00:07:53.777 ************************************ 00:07:53.777 15:13:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.777 15:13:03 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:53.777 15:13:03 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.777 15:13:03 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.777 15:13:03 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:53.777 15:13:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.777 15:13:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.777 15:13:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.777 15:13:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.777 15:13:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.777 15:13:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.777 15:13:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.777 15:13:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:53.777 15:13:03 accel -- accel/accel.sh@41 -- # jq -r . 00:07:53.777 ************************************ 00:07:53.777 START TEST accel_dif_functional_tests 00:07:53.777 ************************************ 00:07:53.777 15:13:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.777 [2024-07-15 15:13:03.181477] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:53.777 [2024-07-15 15:13:03.181526] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid499812 ] 00:07:53.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.777 [2024-07-15 15:13:03.244528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.777 [2024-07-15 15:13:03.312170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.777 [2024-07-15 15:13:03.312287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.777 [2024-07-15 15:13:03.312291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.777 00:07:53.777 00:07:53.777 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.777 http://cunit.sourceforge.net/ 00:07:53.777 00:07:53.777 00:07:53.777 Suite: accel_dif 00:07:53.777 Test: verify: DIF generated, GUARD check ...passed 00:07:53.777 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.777 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.777 Test: verify: DIF not generated, GUARD check ...[2024-07-15 15:13:03.367461] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.777 passed 00:07:53.777 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 15:13:03.367505] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.777 passed 00:07:53.777 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 15:13:03.367525] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.777 passed 00:07:53.777 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:53.777 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 15:13:03.367574] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.777 passed 00:07:53.777 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:53.777 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:53.777 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.777 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 15:13:03.367690] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.777 passed 00:07:53.777 Test: verify copy: DIF generated, GUARD check ...passed 00:07:53.777 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:53.777 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:53.777 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 15:13:03.367816] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.777 passed 00:07:53.777 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 15:13:03.367839] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.777 passed 00:07:53.777 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 15:13:03.367861] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.777 passed 00:07:53.777 Test: generate copy: DIF generated, GUARD check ...passed 00:07:53.777 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:53.777 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.777 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.777 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.777 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.777 Test: generate copy: iovecs-len validate ...[2024-07-15 15:13:03.368052] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:53.777 passed 00:07:53.777 Test: generate copy: buffer alignment validate ...passed 00:07:53.777 00:07:53.777 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.777 suites 1 1 n/a 0 0 00:07:53.777 tests 26 26 26 0 0 00:07:53.777 asserts 115 115 115 0 n/a 00:07:53.777 00:07:53.777 Elapsed time = 0.002 seconds 00:07:54.037 00:07:54.037 real 0m0.352s 00:07:54.037 user 0m0.486s 00:07:54.037 sys 0m0.129s 00:07:54.037 15:13:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.037 15:13:03 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:54.037 ************************************ 00:07:54.037 END TEST accel_dif_functional_tests 00:07:54.037 ************************************ 00:07:54.037 15:13:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.037 00:07:54.037 real 0m29.827s 00:07:54.037 user 0m33.382s 00:07:54.037 sys 0m4.013s 00:07:54.037 15:13:03 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.037 15:13:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.037 ************************************ 00:07:54.037 END TEST accel 00:07:54.037 ************************************ 00:07:54.037 15:13:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.037 15:13:03 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.037 15:13:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.037 15:13:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.037 15:13:03 -- common/autotest_common.sh@10 -- # set +x 00:07:54.037 ************************************ 00:07:54.037 START TEST accel_rpc 00:07:54.037 ************************************ 00:07:54.037 15:13:03 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:54.298 * Looking for test storage... 00:07:54.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:54.298 15:13:03 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:54.298 15:13:03 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=500147 00:07:54.298 15:13:03 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 500147 00:07:54.298 15:13:03 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 500147 ']' 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.298 15:13:03 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.298 [2024-07-15 15:13:03.729346] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:07:54.298 [2024-07-15 15:13:03.729400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500147 ] 00:07:54.298 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.298 [2024-07-15 15:13:03.796036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.298 [2024-07-15 15:13:03.864618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.237 ************************************ 00:07:55.237 START TEST accel_assign_opcode 00:07:55.237 ************************************ 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.237 [2024-07-15 15:13:04.522533] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.237 [2024-07-15 15:13:04.530545] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.237 software 00:07:55.237 00:07:55.237 real 0m0.200s 00:07:55.237 user 0m0.045s 00:07:55.237 sys 0m0.008s 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.237 15:13:04 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:55.237 ************************************ 00:07:55.237 END TEST accel_assign_opcode 00:07:55.237 ************************************ 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:55.237 15:13:04 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 500147 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 500147 ']' 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 500147 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 500147 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 500147' 00:07:55.237 killing process with pid 500147 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@967 -- # kill 500147 00:07:55.237 15:13:04 accel_rpc -- common/autotest_common.sh@972 -- # wait 500147 00:07:55.497 00:07:55.497 real 0m1.421s 00:07:55.497 user 0m1.502s 00:07:55.497 sys 0m0.388s 00:07:55.497 15:13:05 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.497 15:13:05 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.497 ************************************ 00:07:55.497 END TEST accel_rpc 00:07:55.497 ************************************ 00:07:55.497 15:13:05 -- common/autotest_common.sh@1142 -- # return 0 00:07:55.497 15:13:05 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.497 15:13:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:55.497 15:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.497 15:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:55.497 ************************************ 00:07:55.497 START TEST app_cmdline 00:07:55.497 ************************************ 00:07:55.497 15:13:05 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:55.757 * Looking for test storage... 
00:07:55.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.757 15:13:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:55.757 15:13:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=500462 00:07:55.757 15:13:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 500462 00:07:55.757 15:13:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 500462 ']' 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.757 15:13:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:55.757 [2024-07-15 15:13:05.234877] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:07:55.757 [2024-07-15 15:13:05.234936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid500462 ] 00:07:55.757 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.757 [2024-07-15 15:13:05.301080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.757 [2024-07-15 15:13:05.366375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.035 15:13:05 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.035 15:13:05 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:56.035 15:13:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:56.296 { 00:07:56.296 "version": "SPDK v24.09-pre git sha1 248c547d0", 00:07:56.296 "fields": { 00:07:56.296 "major": 24, 00:07:56.296 "minor": 9, 00:07:56.296 "patch": 0, 00:07:56.296 "suffix": "-pre", 00:07:56.296 "commit": "248c547d0" 00:07:56.296 } 00:07:56.296 } 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.296 request: 00:07:56.296 { 00:07:56.296 "method": "env_dpdk_get_mem_stats", 00:07:56.296 "req_id": 1 00:07:56.296 } 00:07:56.296 Got JSON-RPC error response 00:07:56.296 response: 00:07:56.296 { 00:07:56.296 "code": -32601, 00:07:56.296 "message": "Method not found" 00:07:56.296 } 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:56.296 15:13:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 500462 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 500462 ']' 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 500462 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.296 15:13:05 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 500462 00:07:56.556 15:13:05 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.556 15:13:05 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.556 15:13:05 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 500462' 00:07:56.556 killing process with pid 500462 00:07:56.556 15:13:05 app_cmdline -- common/autotest_common.sh@967 -- # kill 500462 00:07:56.556 15:13:05 app_cmdline -- common/autotest_common.sh@972 -- # wait 500462 00:07:56.556 00:07:56.556 real 0m1.074s 00:07:56.556 user 0m1.329s 00:07:56.556 sys 0m0.352s 00:07:56.556 15:13:06 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.556 
15:13:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:56.556 ************************************ 00:07:56.556 END TEST app_cmdline 00:07:56.556 ************************************ 00:07:56.817 15:13:06 -- common/autotest_common.sh@1142 -- # return 0 00:07:56.817 15:13:06 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:56.817 15:13:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:56.817 15:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.817 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:56.817 ************************************ 00:07:56.817 START TEST version 00:07:56.817 ************************************ 00:07:56.817 15:13:06 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:56.817 * Looking for test storage... 00:07:56.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.817 15:13:06 version -- app/version.sh@17 -- # get_header_version major 00:07:56.817 15:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # cut -f2 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.817 15:13:06 version -- app/version.sh@17 -- # major=24 00:07:56.817 15:13:06 version -- app/version.sh@18 -- # get_header_version minor 00:07:56.817 15:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # cut -f2 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.817 15:13:06 version -- app/version.sh@18 -- # minor=9 00:07:56.817 15:13:06 version -- app/version.sh@19 -- # get_header_version patch 00:07:56.817 15:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # cut -f2 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.817 15:13:06 version -- app/version.sh@19 -- # patch=0 00:07:56.817 15:13:06 version -- app/version.sh@20 -- # get_header_version suffix 00:07:56.817 15:13:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # cut -f2 00:07:56.817 15:13:06 version -- app/version.sh@14 -- # tr -d '"' 00:07:56.817 15:13:06 version -- app/version.sh@20 -- # suffix=-pre 00:07:56.817 15:13:06 version -- app/version.sh@22 -- # version=24.9 00:07:56.817 15:13:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:56.817 15:13:06 version -- app/version.sh@28 -- # version=24.9rc0 00:07:56.817 15:13:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:56.817 15:13:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
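The version test derives the expected version string straight from the public header and cross-checks it against the installed Python package: each field comes out of include/spdk/version.h with the grep/cut/tr pipeline shown in the trace, and the "-pre" suffix is expected to surface as "rc0" in the package version. A condensed sketch of that pattern (paths are relative to the SPDK repository root):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 24
  grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # 9
  grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'  # -pre
  python3 -c 'import spdk; print(spdk.__version__)'                                                # 24.9rc0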
00:07:56.817 15:13:06 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:56.817 15:13:06 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:56.817 00:07:56.817 real 0m0.169s 00:07:56.817 user 0m0.082s 00:07:56.817 sys 0m0.126s 00:07:56.817 15:13:06 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.817 15:13:06 version -- common/autotest_common.sh@10 -- # set +x 00:07:56.817 ************************************ 00:07:56.817 END TEST version 00:07:56.817 ************************************ 00:07:56.817 15:13:06 -- common/autotest_common.sh@1142 -- # return 0 00:07:56.817 15:13:06 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@198 -- # uname -s 00:07:57.079 15:13:06 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:57.079 15:13:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:57.079 15:13:06 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:57.079 15:13:06 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:57.079 15:13:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.079 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:57.079 15:13:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:57.079 15:13:06 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:57.079 15:13:06 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.079 15:13:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.079 15:13:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.079 15:13:06 -- common/autotest_common.sh@10 -- # set +x 00:07:57.079 ************************************ 00:07:57.079 START TEST nvmf_tcp 00:07:57.079 ************************************ 00:07:57.079 15:13:06 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.079 * Looking for test storage... 00:07:57.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.079 15:13:06 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.079 15:13:06 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.079 15:13:06 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.079 15:13:06 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.079 15:13:06 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.079 15:13:06 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.079 15:13:06 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.079 15:13:06 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:57.080 15:13:06 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:57.080 15:13:06 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.080 15:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:57.080 15:13:06 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.080 15:13:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.080 15:13:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.080 15:13:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.341 ************************************ 00:07:57.341 START TEST nvmf_example 00:07:57.341 ************************************ 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:57.341 * Looking for test storage... 
00:07:57.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.341 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:57.342 15:13:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:05.480 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:05.480 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:05.480 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:05.481 Found net devices under 
0000:31:00.0: cvl_0_0 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:05.481 Found net devices under 0000:31:00.1: cvl_0_1 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:05.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:08:05.481 00:08:05.481 --- 10.0.0.2 ping statistics --- 00:08:05.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.481 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:05.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:08:05.481 00:08:05.481 --- 10.0.0.1 ping statistics --- 00:08:05.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.481 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=504877 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 504877 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 504877 ']' 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
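Before the example target comes up, nvmf/common.sh wires the two ice ports into a back-to-back target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2/24, cvl_0_1 keeps the initiator address 10.0.0.1/24 in the root namespace, TCP port 4420 is opened, and both directions are ping-tested. Condensed from the trace above (the interface names are the cvl devices enumerated for this machine's e810 NICs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator side -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator address

The nvmf example application is then launched inside that namespace with -i 0 -g 10000 -m 0xF, and the harness waits for its RPC socket before configuring it.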
00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.481 15:13:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:05.481 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:06.053 15:13:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:06.053 EAL: No free 2048 kB hugepages reported on node 1 
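With the example target listening, the subsystem is assembled over JSON-RPC and then exercised with spdk_nvme_perf from the initiator side. Replayed by hand, the sequence in the trace amounts to the following (sketch only; paths are relative to the SPDK checkout, rpc.py talks to the target's default RPC socket, and the listener address mirrors the 10.0.0.2:4420 namespace setup above):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MB bdev with 512 B blocks, returned as "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'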
00:08:18.324 Initializing NVMe Controllers 00:08:18.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:18.324 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:18.324 Initialization complete. Launching workers. 00:08:18.324 ======================================================== 00:08:18.324 Latency(us) 00:08:18.324 Device Information : IOPS MiB/s Average min max 00:08:18.324 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16986.45 66.35 3767.31 643.85 16267.40 00:08:18.324 ======================================================== 00:08:18.324 Total : 16986.45 66.35 3767.31 643.85 16267.40 00:08:18.324 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.324 rmmod nvme_tcp 00:08:18.324 rmmod nvme_fabrics 00:08:18.324 rmmod nvme_keyring 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 504877 ']' 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 504877 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 504877 ']' 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 504877 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 504877 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 504877' 00:08:18.324 killing process with pid 504877 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 504877 00:08:18.324 15:13:25 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 504877 00:08:18.324 nvmf threads initialize successfully 00:08:18.324 bdev subsystem init successfully 00:08:18.324 created a nvmf target service 00:08:18.324 create targets's poll groups done 00:08:18.324 all subsystems of target started 00:08:18.324 nvmf target is running 00:08:18.324 all subsystems of target stopped 00:08:18.324 destroy targets's poll groups done 00:08:18.324 destroyed the nvmf target service 00:08:18.324 bdev subsystem finish successfully 00:08:18.324 nvmf threads destroy successfully 00:08:18.324 15:13:26 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.324 15:13:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.584 00:08:18.584 real 0m21.437s 00:08:18.584 user 0m46.824s 00:08:18.584 sys 0m6.743s 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.584 15:13:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:18.584 ************************************ 00:08:18.584 END TEST nvmf_example 00:08:18.584 ************************************ 00:08:18.584 15:13:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:18.584 15:13:28 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:18.584 15:13:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:18.584 15:13:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.584 15:13:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.846 ************************************ 00:08:18.846 START TEST nvmf_filesystem 00:08:18.846 ************************************ 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:18.846 * Looking for test storage... 
00:08:18.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:18.846 15:13:28 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:18.846 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:18.847 #define SPDK_CONFIG_H 00:08:18.847 #define SPDK_CONFIG_APPS 1 00:08:18.847 #define SPDK_CONFIG_ARCH native 00:08:18.847 #undef SPDK_CONFIG_ASAN 00:08:18.847 #undef SPDK_CONFIG_AVAHI 00:08:18.847 #undef SPDK_CONFIG_CET 00:08:18.847 #define SPDK_CONFIG_COVERAGE 1 00:08:18.847 #define SPDK_CONFIG_CROSS_PREFIX 00:08:18.847 #undef SPDK_CONFIG_CRYPTO 00:08:18.847 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:18.847 #undef SPDK_CONFIG_CUSTOMOCF 00:08:18.847 #undef SPDK_CONFIG_DAOS 00:08:18.847 #define SPDK_CONFIG_DAOS_DIR 00:08:18.847 #define SPDK_CONFIG_DEBUG 1 00:08:18.847 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:18.847 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:18.847 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:18.847 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:18.847 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:18.847 #undef SPDK_CONFIG_DPDK_UADK 00:08:18.847 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:18.847 #define SPDK_CONFIG_EXAMPLES 1 00:08:18.847 #undef SPDK_CONFIG_FC 00:08:18.847 #define SPDK_CONFIG_FC_PATH 00:08:18.847 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:18.847 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:18.847 #undef SPDK_CONFIG_FUSE 00:08:18.847 #undef SPDK_CONFIG_FUZZER 00:08:18.847 #define SPDK_CONFIG_FUZZER_LIB 00:08:18.847 #undef SPDK_CONFIG_GOLANG 00:08:18.847 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:18.847 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:18.847 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:18.847 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:18.847 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:18.847 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:18.847 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:18.847 #define SPDK_CONFIG_IDXD 1 00:08:18.847 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:18.847 #undef SPDK_CONFIG_IPSEC_MB 00:08:18.847 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:18.847 #define SPDK_CONFIG_ISAL 1 00:08:18.847 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:18.847 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:18.847 #define SPDK_CONFIG_LIBDIR 00:08:18.847 #undef SPDK_CONFIG_LTO 00:08:18.847 #define SPDK_CONFIG_MAX_LCORES 128 00:08:18.847 #define SPDK_CONFIG_NVME_CUSE 1 00:08:18.847 #undef SPDK_CONFIG_OCF 00:08:18.847 #define SPDK_CONFIG_OCF_PATH 00:08:18.847 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:18.847 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:18.847 #define SPDK_CONFIG_PGO_DIR 00:08:18.847 #undef SPDK_CONFIG_PGO_USE 00:08:18.847 #define SPDK_CONFIG_PREFIX /usr/local 00:08:18.847 #undef SPDK_CONFIG_RAID5F 00:08:18.847 #undef SPDK_CONFIG_RBD 00:08:18.847 #define SPDK_CONFIG_RDMA 1 00:08:18.847 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:18.847 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:18.847 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:18.847 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:18.847 #define SPDK_CONFIG_SHARED 1 00:08:18.847 #undef SPDK_CONFIG_SMA 00:08:18.847 #define SPDK_CONFIG_TESTS 1 00:08:18.847 #undef SPDK_CONFIG_TSAN 00:08:18.847 #define SPDK_CONFIG_UBLK 1 00:08:18.847 #define SPDK_CONFIG_UBSAN 1 00:08:18.847 #undef SPDK_CONFIG_UNIT_TESTS 00:08:18.847 #undef SPDK_CONFIG_URING 00:08:18.847 #define SPDK_CONFIG_URING_PATH 00:08:18.847 #undef SPDK_CONFIG_URING_ZNS 00:08:18.847 #undef SPDK_CONFIG_USDT 00:08:18.847 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:18.847 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:18.847 #define SPDK_CONFIG_VFIO_USER 1 00:08:18.847 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:18.847 #define SPDK_CONFIG_VHOST 1 00:08:18.847 #define SPDK_CONFIG_VIRTIO 1 00:08:18.847 #undef SPDK_CONFIG_VTUNE 00:08:18.847 #define SPDK_CONFIG_VTUNE_DIR 00:08:18.847 #define SPDK_CONFIG_WERROR 1 00:08:18.847 #define SPDK_CONFIG_WPDK_DIR 00:08:18.847 #undef SPDK_CONFIG_XNVME 00:08:18.847 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:18.847 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:18.848 15:13:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 507680 ]] 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 507680 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.3G6XGc 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3G6XGc/tests/target /tmp/spdk.3G6XGc 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:18.848 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956096512 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328333312 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122834665472 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6536314880 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864364032 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9834496 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=390144 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:18.849 15:13:28 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=112640 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684765184 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=724992 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:18.849 * Looking for test storage... 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.849 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122834665472 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8750907392 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.109 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.110 15:13:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:27.254 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:27.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:08:27.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:27.255 Found net devices under 0000:31:00.0: cvl_0_0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:27.255 Found net devices under 0000:31:00.1: cvl_0_1 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:27.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:08:27.255 00:08:27.255 --- 10.0.0.2 ping statistics --- 00:08:27.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.255 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:27.255 00:08:27.255 --- 10.0.0.1 ping statistics --- 00:08:27.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.255 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 ************************************ 00:08:27.255 START TEST nvmf_filesystem_no_in_capsule 00:08:27.255 ************************************ 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=511852 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 511852 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 511852 ']' 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.255 15:13:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 [2024-07-15 15:13:36.515405] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:08:27.255 [2024-07-15 15:13:36.515452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.255 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.255 [2024-07-15 15:13:36.589225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.255 [2024-07-15 15:13:36.659693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.255 [2024-07-15 15:13:36.659732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.255 [2024-07-15 15:13:36.659739] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.255 [2024-07-15 15:13:36.659746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.255 [2024-07-15 15:13:36.659752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.255 [2024-07-15 15:13:36.659859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.255 [2024-07-15 15:13:36.659975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.255 [2024-07-15 15:13:36.660022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.255 [2024-07-15 15:13:36.660024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.825 [2024-07-15 15:13:37.333516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.825 
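
The records above prepare the two physical E810 ports for the NVMe/TCP run: the first port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, connectivity is verified in both directions, and nvmf_tgt is started inside the namespace with the TCP transport created. A condensed sketch of that sequence, with the interface names, addresses, path and flags taken from this run (they will differ on other systems):

  # values (cvl_0_0/cvl_0_1, 10.0.0.0/24, port 4420) are specific to this run
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns
  # target application and transport, as invoked by this job
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # rpc_cmd is the test suite's rpc.py wrapper; -c 0 for this half
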
15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.825 Malloc1 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.825 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:28.085 [2024-07-15 15:13:37.462349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:28.085 { 00:08:28.085 "name": "Malloc1", 00:08:28.085 "aliases": [ 00:08:28.085 "9a3a2d1c-38a1-4c9a-91f6-14ea730e1415" 00:08:28.085 ], 00:08:28.085 "product_name": "Malloc disk", 00:08:28.085 "block_size": 512, 00:08:28.085 "num_blocks": 1048576, 00:08:28.085 "uuid": "9a3a2d1c-38a1-4c9a-91f6-14ea730e1415", 00:08:28.085 "assigned_rate_limits": { 00:08:28.085 "rw_ios_per_sec": 0, 00:08:28.085 "rw_mbytes_per_sec": 0, 00:08:28.085 "r_mbytes_per_sec": 0, 00:08:28.085 "w_mbytes_per_sec": 0 00:08:28.085 }, 00:08:28.085 "claimed": true, 00:08:28.085 "claim_type": "exclusive_write", 00:08:28.085 "zoned": false, 00:08:28.085 "supported_io_types": { 00:08:28.085 "read": true, 00:08:28.085 "write": true, 00:08:28.085 "unmap": true, 00:08:28.085 "flush": true, 00:08:28.085 "reset": true, 00:08:28.085 "nvme_admin": false, 00:08:28.085 "nvme_io": false, 00:08:28.085 "nvme_io_md": false, 00:08:28.085 "write_zeroes": true, 00:08:28.085 "zcopy": true, 00:08:28.085 "get_zone_info": false, 00:08:28.085 "zone_management": false, 00:08:28.085 "zone_append": false, 00:08:28.085 "compare": false, 00:08:28.085 "compare_and_write": false, 00:08:28.085 "abort": true, 00:08:28.085 "seek_hole": false, 00:08:28.085 "seek_data": false, 00:08:28.085 "copy": true, 00:08:28.085 "nvme_iov_md": false 00:08:28.085 }, 00:08:28.085 "memory_domains": [ 00:08:28.085 { 00:08:28.085 "dma_device_id": "system", 00:08:28.085 "dma_device_type": 1 00:08:28.085 }, 00:08:28.085 { 00:08:28.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.085 "dma_device_type": 2 00:08:28.085 } 00:08:28.085 ], 00:08:28.085 "driver_specific": {} 00:08:28.085 } 00:08:28.085 ]' 00:08:28.085 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:28.086 15:13:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:29.466 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:29.466 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.466 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:29.466 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:29.466 15:13:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:32.005 15:13:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:32.574 15:13:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:33.956 ************************************ 
00:08:33.956 START TEST filesystem_ext4 00:08:33.956 ************************************ 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:33.956 15:13:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:33.956 mke2fs 1.46.5 (30-Dec-2021) 00:08:33.956 Discarding device blocks: 0/522240 done 00:08:33.956 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:33.956 Filesystem UUID: 951566cc-3b63-45b1-836d-f8d432b7655b 00:08:33.956 Superblock backups stored on blocks: 00:08:33.956 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:33.956 00:08:33.956 Allocating group tables: 0/64 done 00:08:33.956 Writing inode tables: 0/64 done 00:08:36.497 Creating journal (8192 blocks): done 00:08:36.497 Writing superblocks and filesystem accounting information: 0/64 done 00:08:36.497 00:08:36.497 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:36.498 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.437 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.437 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:37.437 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.438 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:37.438 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:37.438 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.438 15:13:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 511852 00:08:37.438 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.438 15:13:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.438 00:08:37.438 real 0m3.806s 00:08:37.438 user 0m0.030s 00:08:37.438 sys 0m0.046s 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:37.438 ************************************ 00:08:37.438 END TEST filesystem_ext4 00:08:37.438 ************************************ 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.438 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.697 ************************************ 00:08:37.697 START TEST filesystem_btrfs 00:08:37.697 ************************************ 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:37.697 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:37.697 
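
The filesystem_ext4 sub-test above is one instance of the cycle filesystem.sh repeats for every filesystem type: format the exported namespace, mount it, create and remove a file, unmount, then confirm the nvmf_tgt process (pid 511852 in this run) survived and the block devices are still visible. Condensed from the records above; device names and the pid are specific to this run:

  mkfs.ext4 -F /dev/nvme0n1p1                 # ext4 forces with -F; btrfs/xfs below use -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 511852                              # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still present
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present
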
15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:37.957 btrfs-progs v6.6.2 00:08:37.957 See https://btrfs.readthedocs.io for more information. 00:08:37.957 00:08:37.957 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:37.957 NOTE: several default settings have changed in version 5.15, please make sure 00:08:37.957 this does not affect your deployments: 00:08:37.957 - DUP for metadata (-m dup) 00:08:37.957 - enabled no-holes (-O no-holes) 00:08:37.957 - enabled free-space-tree (-R free-space-tree) 00:08:37.957 00:08:37.957 Label: (null) 00:08:37.957 UUID: 32a9f4cf-f24e-4ad9-bc4c-f4233f13112e 00:08:37.957 Node size: 16384 00:08:37.957 Sector size: 4096 00:08:37.957 Filesystem size: 510.00MiB 00:08:37.957 Block group profiles: 00:08:37.957 Data: single 8.00MiB 00:08:37.957 Metadata: DUP 32.00MiB 00:08:37.957 System: DUP 8.00MiB 00:08:37.957 SSD detected: yes 00:08:37.957 Zoned device: no 00:08:37.957 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:37.957 Runtime features: free-space-tree 00:08:37.957 Checksum: crc32c 00:08:37.957 Number of devices: 1 00:08:37.957 Devices: 00:08:37.957 ID SIZE PATH 00:08:37.957 1 510.00MiB /dev/nvme0n1p1 00:08:37.957 00:08:37.957 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:37.957 15:13:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 511852 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.959 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.959 00:08:38.959 real 0m1.287s 00:08:38.960 user 0m0.021s 00:08:38.960 sys 0m0.065s 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:38.960 
************************************ 00:08:38.960 END TEST filesystem_btrfs 00:08:38.960 ************************************ 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:38.960 ************************************ 00:08:38.960 START TEST filesystem_xfs 00:08:38.960 ************************************ 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:38.960 15:13:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:38.960 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:38.960 = sectsz=512 attr=2, projid32bit=1 00:08:38.960 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:38.960 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:38.960 data = bsize=4096 blocks=130560, imaxpct=25 00:08:38.960 = sunit=0 swidth=0 blks 00:08:38.960 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:38.960 log =internal log bsize=4096 blocks=16384, version=2 00:08:38.960 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:38.960 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:40.343 Discarding blocks...Done. 
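
The xfs pass formats the same 510 MiB partition; the geometry printed above (agcount=4, agsize=32640, 130560 data blocks of 4096 bytes) is simply that partition expressed in 4 KiB blocks. To reproduce or inspect the layout outside the harness, something like the following would do; the partitioning command is taken from this run, while xfs_info is added here purely as an illustration:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.xfs -f /dev/nvme0n1p1
  xfs_info /dev/nvme0n1p1      # prints the same meta-data/data/log geometry shown above
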
00:08:40.343 15:13:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:40.343 15:13:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.727 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 511852 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.987 00:08:41.987 real 0m3.021s 00:08:41.987 user 0m0.021s 00:08:41.987 sys 0m0.057s 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:41.987 ************************************ 00:08:41.987 END TEST filesystem_xfs 00:08:41.987 ************************************ 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:41.987 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.248 15:13:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 511852 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 511852 ']' 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 511852 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 511852 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 511852' 00:08:42.248 killing process with pid 511852 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 511852 00:08:42.248 15:13:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 511852 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.508 00:08:42.508 real 0m15.581s 00:08:42.508 user 1m1.467s 00:08:42.508 sys 0m1.099s 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.508 ************************************ 00:08:42.508 END TEST nvmf_filesystem_no_in_capsule 00:08:42.508 ************************************ 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:42.508 ************************************ 00:08:42.508 START TEST nvmf_filesystem_in_capsule 00:08:42.508 ************************************ 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=515100 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 515100 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 515100 ']' 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:42.508 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.768 [2024-07-15 15:13:52.177655] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:08:42.768 [2024-07-15 15:13:52.177704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.768 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.768 [2024-07-15 15:13:52.249433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.768 [2024-07-15 15:13:52.319545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.768 [2024-07-15 15:13:52.319593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:42.768 [2024-07-15 15:13:52.319601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.768 [2024-07-15 15:13:52.319607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.768 [2024-07-15 15:13:52.319613] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.769 [2024-07-15 15:13:52.319723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.769 [2024-07-15 15:13:52.319855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.769 [2024-07-15 15:13:52.320018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.769 [2024-07-15 15:13:52.320018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.338 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.338 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:43.338 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.338 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.338 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.598 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:43.598 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:43.598 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 [2024-07-15 15:13:52.998544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 [2024-07-15 15:13:53.127153] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.598 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:43.598 { 00:08:43.598 "name": "Malloc1", 00:08:43.598 "aliases": [ 00:08:43.598 "b978db65-b9c9-450b-9d33-ed6977da59ac" 00:08:43.598 ], 00:08:43.598 "product_name": "Malloc disk", 00:08:43.598 "block_size": 512, 00:08:43.598 "num_blocks": 1048576, 00:08:43.598 "uuid": "b978db65-b9c9-450b-9d33-ed6977da59ac", 00:08:43.598 "assigned_rate_limits": { 00:08:43.598 "rw_ios_per_sec": 0, 00:08:43.598 "rw_mbytes_per_sec": 0, 00:08:43.598 "r_mbytes_per_sec": 0, 00:08:43.598 "w_mbytes_per_sec": 0 00:08:43.598 }, 00:08:43.598 "claimed": true, 00:08:43.598 "claim_type": "exclusive_write", 00:08:43.598 "zoned": false, 00:08:43.598 "supported_io_types": { 00:08:43.598 "read": true, 00:08:43.598 "write": true, 00:08:43.598 "unmap": true, 00:08:43.598 "flush": true, 00:08:43.598 "reset": true, 00:08:43.598 "nvme_admin": false, 00:08:43.598 "nvme_io": false, 00:08:43.598 "nvme_io_md": false, 00:08:43.598 "write_zeroes": true, 00:08:43.598 "zcopy": true, 00:08:43.598 "get_zone_info": false, 00:08:43.598 "zone_management": false, 00:08:43.598 
"zone_append": false, 00:08:43.598 "compare": false, 00:08:43.598 "compare_and_write": false, 00:08:43.598 "abort": true, 00:08:43.598 "seek_hole": false, 00:08:43.598 "seek_data": false, 00:08:43.598 "copy": true, 00:08:43.599 "nvme_iov_md": false 00:08:43.599 }, 00:08:43.599 "memory_domains": [ 00:08:43.599 { 00:08:43.599 "dma_device_id": "system", 00:08:43.599 "dma_device_type": 1 00:08:43.599 }, 00:08:43.599 { 00:08:43.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.599 "dma_device_type": 2 00:08:43.599 } 00:08:43.599 ], 00:08:43.599 "driver_specific": {} 00:08:43.599 } 00:08:43.599 ]' 00:08:43.599 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:43.599 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:43.599 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:43.857 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:43.857 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:43.857 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:43.858 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:43.858 15:13:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.235 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.235 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.235 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.235 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.235 15:13:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:47.145 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:47.405 15:13:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:48.343 15:13:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.285 ************************************ 00:08:49.285 START TEST filesystem_in_capsule_ext4 00:08:49.285 ************************************ 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.285 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:49.286 15:13:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:49.286 15:13:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:49.286 mke2fs 1.46.5 (30-Dec-2021) 00:08:49.286 Discarding device blocks: 0/522240 done 00:08:49.286 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:49.286 Filesystem UUID: 6629d70c-9d0b-4c03-abd5-7d0988f525fc 00:08:49.286 Superblock backups stored on blocks: 00:08:49.286 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:49.286 00:08:49.286 Allocating group tables: 0/64 done 00:08:49.286 Writing inode tables: 0/64 done 00:08:49.546 Creating journal (8192 blocks): done 00:08:50.489 Writing superblocks and filesystem accounting information: 0/64 done 00:08:50.489 00:08:50.489 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:50.489 15:13:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:50.750 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 515100 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:50.751 00:08:50.751 real 0m1.529s 00:08:50.751 user 0m0.029s 00:08:50.751 sys 0m0.045s 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:50.751 ************************************ 00:08:50.751 END TEST filesystem_in_capsule_ext4 00:08:50.751 ************************************ 00:08:50.751 
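The ext4 pass above reduces to a short format/mount/write/unmount cycle. A minimal sketch of what the filesystem test performs for ext4, assuming the GPT partition and mount point prepared earlier in this run:

  dev=/dev/nvme0n1p1
  mnt=/mnt/device

  mkfs.ext4 -F "$dev"      # -F: format without prompting; the partition was just created for this test
  mount "$dev" "$mnt"
  touch "$mnt/aaa"         # prove the filesystem accepts writes over the NVMe/TCP path
  sync
  rm "$mnt/aaa"
  sync
  umount "$mnt"            # detach cleanly before the next filesystem variant runs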
15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.751 ************************************ 00:08:50.751 START TEST filesystem_in_capsule_btrfs 00:08:50.751 ************************************ 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:50.751 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:51.011 btrfs-progs v6.6.2 00:08:51.011 See https://btrfs.readthedocs.io for more information. 00:08:51.011 00:08:51.011 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:51.011 NOTE: several default settings have changed in version 5.15, please make sure 00:08:51.011 this does not affect your deployments: 00:08:51.011 - DUP for metadata (-m dup) 00:08:51.011 - enabled no-holes (-O no-holes) 00:08:51.011 - enabled free-space-tree (-R free-space-tree) 00:08:51.011 00:08:51.011 Label: (null) 00:08:51.011 UUID: 28671d46-05ed-4245-9db6-e7f7c1fd538b 00:08:51.011 Node size: 16384 00:08:51.011 Sector size: 4096 00:08:51.011 Filesystem size: 510.00MiB 00:08:51.011 Block group profiles: 00:08:51.011 Data: single 8.00MiB 00:08:51.011 Metadata: DUP 32.00MiB 00:08:51.011 System: DUP 8.00MiB 00:08:51.011 SSD detected: yes 00:08:51.011 Zoned device: no 00:08:51.011 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:51.011 Runtime features: free-space-tree 00:08:51.011 Checksum: crc32c 00:08:51.011 Number of devices: 1 00:08:51.011 Devices: 00:08:51.011 ID SIZE PATH 00:08:51.011 1 510.00MiB /dev/nvme0n1p1 00:08:51.011 00:08:51.011 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:51.011 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:51.583 15:14:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 515100 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.583 00:08:51.583 real 0m0.683s 00:08:51.583 user 0m0.024s 00:08:51.583 sys 0m0.062s 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:51.583 ************************************ 00:08:51.583 END TEST filesystem_in_capsule_btrfs 00:08:51.583 ************************************ 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.583 ************************************ 00:08:51.583 START TEST filesystem_in_capsule_xfs 00:08:51.583 ************************************ 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:51.583 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:51.583 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:51.583 = sectsz=512 attr=2, projid32bit=1 00:08:51.583 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:51.583 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:51.583 data = bsize=4096 blocks=130560, imaxpct=25 00:08:51.583 = sunit=0 swidth=0 blks 00:08:51.583 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:51.583 log =internal log bsize=4096 blocks=16384, version=2 00:08:51.583 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:51.583 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:52.523 Discarding blocks...Done. 
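The btrfs and xfs passes reuse the same cycle; the only per-filesystem difference visible in the traces is which force flag make_filesystem selects. A sketch of that selection (ext4's mkfs takes uppercase -F, btrfs and xfs take lowercase -f):

  make_fs() {
      local fstype=$1 dev=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      "mkfs.$fstype" "$force" "$dev"
  }

  # e.g. make_fs xfs /dev/nvme0n1p1, matching the mkfs.xfs -f invocation above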
00:08:52.523 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:52.523 15:14:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 515100 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.062 00:08:55.062 real 0m3.171s 00:08:55.062 user 0m0.022s 00:08:55.062 sys 0m0.057s 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:55.062 ************************************ 00:08:55.062 END TEST filesystem_in_capsule_xfs 00:08:55.062 ************************************ 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:55.062 15:14:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:55.062 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 515100 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 515100 ']' 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 515100 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 515100 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 515100' 00:08:55.063 killing process with pid 515100 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 515100 00:08:55.063 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 515100 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:55.322 00:08:55.322 real 0m12.701s 00:08:55.322 user 0m50.044s 00:08:55.322 sys 0m1.032s 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.322 ************************************ 00:08:55.322 END TEST nvmf_filesystem_in_capsule 00:08:55.322 ************************************ 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.322 rmmod nvme_tcp 00:08:55.322 rmmod nvme_fabrics 00:08:55.322 rmmod nvme_keyring 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:55.322 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.323 15:14:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.900 15:14:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:57.900 00:08:57.900 real 0m38.793s 00:08:57.900 user 1m53.927s 00:08:57.900 sys 0m8.129s 00:08:57.900 15:14:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.900 15:14:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.900 ************************************ 00:08:57.900 END TEST nvmf_filesystem 00:08:57.900 ************************************ 00:08:57.900 15:14:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:57.900 15:14:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:57.900 15:14:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.901 15:14:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.901 15:14:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:57.901 ************************************ 00:08:57.901 START TEST nvmf_target_discovery 00:08:57.901 ************************************ 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:57.901 * Looking for test storage... 
00:08:57.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:57.901 15:14:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.060 15:14:14 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:06.060 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:06.060 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:06.060 Found net devices under 0000:31:00.0: cvl_0_0 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.060 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:06.061 Found net devices under 0000:31:00.1: cvl_0_1 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:09:06.061 00:09:06.061 --- 10.0.0.2 ping statistics --- 00:09:06.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.061 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:09:06.061 00:09:06.061 --- 10.0.0.1 ping statistics --- 00:09:06.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.061 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=522217 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 522217 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 522217 ']' 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:06.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.061 15:14:14 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.061 [2024-07-15 15:14:14.879078] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:09:06.061 [2024-07-15 15:14:14.879142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.061 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.061 [2024-07-15 15:14:14.954921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.061 [2024-07-15 15:14:15.029794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.061 [2024-07-15 15:14:15.029832] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.061 [2024-07-15 15:14:15.029839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.061 [2024-07-15 15:14:15.029846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.061 [2024-07-15 15:14:15.029851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.061 [2024-07-15 15:14:15.029905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.061 [2024-07-15 15:14:15.029989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.061 [2024-07-15 15:14:15.030134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.061 [2024-07-15 15:14:15.030134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.061 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.061 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:06.061 15:14:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:06.061 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:06.061 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 [2024-07-15 15:14:15.708481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
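At this point nvmfappstart has the target running inside the cvl_0_0_ns_spdk namespace and the TCP transport is in place for the test subsystems. Reproduced by hand, and assuming the same workspace paths and core mask as this run, the sequence is roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # start the target in the namespace that owns the test NIC (same flags as nvmfappstart above)
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &

  # wait for the RPC socket, as waitforlisten does, by polling a cheap RPC
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 1; done

  # create the TCP transport with the exact options rpc_cmd passes above
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192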
00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 Null1 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 [2024-07-15 15:14:15.764768] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 Null2 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:06.322 15:14:15 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 Null3 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 Null4 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.322 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.323 15:14:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.323 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:09:06.583 00:09:06.583 Discovery Log Number of Records 6, Generation counter 6 00:09:06.583 =====Discovery Log Entry 0====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: current discovery subsystem 00:09:06.583 treq: not required 00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4420 00:09:06.583 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: explicit discovery connections, duplicate discovery information 00:09:06.583 sectype: none 00:09:06.583 =====Discovery Log Entry 1====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: nvme subsystem 00:09:06.583 treq: not required 00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4420 00:09:06.583 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: none 00:09:06.583 sectype: none 00:09:06.583 =====Discovery Log Entry 2====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: nvme subsystem 00:09:06.583 treq: not required 00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4420 00:09:06.583 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: none 00:09:06.583 sectype: none 00:09:06.583 =====Discovery Log Entry 3====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: nvme subsystem 00:09:06.583 treq: not required 00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4420 00:09:06.583 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: none 00:09:06.583 sectype: none 00:09:06.583 =====Discovery Log Entry 4====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: nvme subsystem 00:09:06.583 treq: not required 
00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4420 00:09:06.583 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: none 00:09:06.583 sectype: none 00:09:06.583 =====Discovery Log Entry 5====== 00:09:06.583 trtype: tcp 00:09:06.583 adrfam: ipv4 00:09:06.583 subtype: discovery subsystem referral 00:09:06.583 treq: not required 00:09:06.583 portid: 0 00:09:06.583 trsvcid: 4430 00:09:06.583 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:06.583 traddr: 10.0.0.2 00:09:06.583 eflags: none 00:09:06.583 sectype: none 00:09:06.583 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:06.583 Perform nvmf subsystem discovery via RPC 00:09:06.583 15:14:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:06.583 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.583 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.583 [ 00:09:06.583 { 00:09:06.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:06.583 "subtype": "Discovery", 00:09:06.583 "listen_addresses": [ 00:09:06.583 { 00:09:06.583 "trtype": "TCP", 00:09:06.583 "adrfam": "IPv4", 00:09:06.583 "traddr": "10.0.0.2", 00:09:06.583 "trsvcid": "4420" 00:09:06.583 } 00:09:06.583 ], 00:09:06.583 "allow_any_host": true, 00:09:06.583 "hosts": [] 00:09:06.583 }, 00:09:06.583 { 00:09:06.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.583 "subtype": "NVMe", 00:09:06.583 "listen_addresses": [ 00:09:06.583 { 00:09:06.583 "trtype": "TCP", 00:09:06.583 "adrfam": "IPv4", 00:09:06.583 "traddr": "10.0.0.2", 00:09:06.583 "trsvcid": "4420" 00:09:06.583 } 00:09:06.583 ], 00:09:06.583 "allow_any_host": true, 00:09:06.583 "hosts": [], 00:09:06.583 "serial_number": "SPDK00000000000001", 00:09:06.583 "model_number": "SPDK bdev Controller", 00:09:06.583 "max_namespaces": 32, 00:09:06.583 "min_cntlid": 1, 00:09:06.583 "max_cntlid": 65519, 00:09:06.583 "namespaces": [ 00:09:06.583 { 00:09:06.583 "nsid": 1, 00:09:06.583 "bdev_name": "Null1", 00:09:06.583 "name": "Null1", 00:09:06.583 "nguid": "D2891DFA5CBD418889FF6AC2C06FCE90", 00:09:06.583 "uuid": "d2891dfa-5cbd-4188-89ff-6ac2c06fce90" 00:09:06.583 } 00:09:06.583 ] 00:09:06.583 }, 00:09:06.583 { 00:09:06.583 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:06.583 "subtype": "NVMe", 00:09:06.583 "listen_addresses": [ 00:09:06.583 { 00:09:06.583 "trtype": "TCP", 00:09:06.583 "adrfam": "IPv4", 00:09:06.583 "traddr": "10.0.0.2", 00:09:06.583 "trsvcid": "4420" 00:09:06.583 } 00:09:06.583 ], 00:09:06.583 "allow_any_host": true, 00:09:06.583 "hosts": [], 00:09:06.583 "serial_number": "SPDK00000000000002", 00:09:06.583 "model_number": "SPDK bdev Controller", 00:09:06.583 "max_namespaces": 32, 00:09:06.583 "min_cntlid": 1, 00:09:06.583 "max_cntlid": 65519, 00:09:06.583 "namespaces": [ 00:09:06.583 { 00:09:06.584 "nsid": 1, 00:09:06.584 "bdev_name": "Null2", 00:09:06.584 "name": "Null2", 00:09:06.584 "nguid": "ACA31BF19D7B401390EF383B40AC6EEE", 00:09:06.584 "uuid": "aca31bf1-9d7b-4013-90ef-383b40ac6eee" 00:09:06.584 } 00:09:06.584 ] 00:09:06.584 }, 00:09:06.584 { 00:09:06.584 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:06.584 "subtype": "NVMe", 00:09:06.584 "listen_addresses": [ 00:09:06.584 { 00:09:06.584 "trtype": "TCP", 00:09:06.584 "adrfam": "IPv4", 00:09:06.584 "traddr": "10.0.0.2", 00:09:06.584 "trsvcid": "4420" 00:09:06.584 } 00:09:06.584 ], 00:09:06.584 "allow_any_host": true, 
00:09:06.584 "hosts": [], 00:09:06.584 "serial_number": "SPDK00000000000003", 00:09:06.584 "model_number": "SPDK bdev Controller", 00:09:06.584 "max_namespaces": 32, 00:09:06.584 "min_cntlid": 1, 00:09:06.584 "max_cntlid": 65519, 00:09:06.584 "namespaces": [ 00:09:06.584 { 00:09:06.584 "nsid": 1, 00:09:06.584 "bdev_name": "Null3", 00:09:06.584 "name": "Null3", 00:09:06.584 "nguid": "9D4B8AF8F8244B4CA9521920AC11AB73", 00:09:06.584 "uuid": "9d4b8af8-f824-4b4c-a952-1920ac11ab73" 00:09:06.584 } 00:09:06.584 ] 00:09:06.584 }, 00:09:06.584 { 00:09:06.584 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:06.584 "subtype": "NVMe", 00:09:06.584 "listen_addresses": [ 00:09:06.584 { 00:09:06.584 "trtype": "TCP", 00:09:06.584 "adrfam": "IPv4", 00:09:06.584 "traddr": "10.0.0.2", 00:09:06.584 "trsvcid": "4420" 00:09:06.584 } 00:09:06.584 ], 00:09:06.584 "allow_any_host": true, 00:09:06.584 "hosts": [], 00:09:06.584 "serial_number": "SPDK00000000000004", 00:09:06.584 "model_number": "SPDK bdev Controller", 00:09:06.584 "max_namespaces": 32, 00:09:06.584 "min_cntlid": 1, 00:09:06.584 "max_cntlid": 65519, 00:09:06.584 "namespaces": [ 00:09:06.584 { 00:09:06.584 "nsid": 1, 00:09:06.584 "bdev_name": "Null4", 00:09:06.584 "name": "Null4", 00:09:06.584 "nguid": "16763FACDF9C4B198ACE5468DF2B19BE", 00:09:06.584 "uuid": "16763fac-df9c-4b19-8ace-5468df2b19be" 00:09:06.584 } 00:09:06.584 ] 00:09:06.584 } 00:09:06.584 ] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.584 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.584 rmmod nvme_tcp 00:09:06.584 rmmod nvme_fabrics 00:09:06.846 rmmod nvme_keyring 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 522217 ']' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 522217 ']' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 522217' 00:09:06.846 killing process with pid 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 522217 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.846 15:14:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.388 15:14:18 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.388 00:09:09.388 real 0m11.403s 00:09:09.388 user 0m7.946s 00:09:09.388 sys 0m5.906s 00:09:09.388 15:14:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.388 15:14:18 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:09.388 ************************************ 00:09:09.388 END TEST nvmf_target_discovery 00:09:09.388 ************************************ 00:09:09.388 15:14:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:09:09.388 15:14:18 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:09.388 15:14:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.388 15:14:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.388 15:14:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.388 ************************************ 00:09:09.388 START TEST nvmf_referrals 00:09:09.388 ************************************ 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:09.388 * Looking for test storage... 00:09:09.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.388 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
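referrals.sh, being sourced here, drives the discovery-referral RPCs against the referral addresses 127.0.0.2-4 defined above. Based on the calls that appear later in this trace, the core flow it exercises can be sketched as follows (rpc.py path assumed; port 4430 and the listener on 8009 are the values used in this run):
# Sketch of the referral flow this test exercises (not the script itself).
RPC="./scripts/rpc.py"                                   # assumed path
$RPC nvmf_create_transport -t tcp -o -u 8192             # options as traced
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length            # the test expects 3
# Referrals are later removed the same way:
$RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430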
00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.389 15:14:18 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.541 15:14:25 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.541 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.541 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.541 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.542 15:14:25 
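The block above is nvmf/common.sh collecting the supported NIC PCI IDs (Intel E810/X722 and Mellanox) and keeping the two E810 devices (0x8086:0x159b) it finds at 0000:31:00.0 and 0000:31:00.1; their kernel net devices are then resolved from sysfs. A simplified sketch of that lookup, with the device addresses from this run:
# Rough sketch of the sysfs net-device lookup performed by the trace above.
for pci in 0000:31:00.0 0000:31:00.1; do
  for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdev" ] || continue
    echo "Found net devices under $pci: $(basename "$netdev")"   # cvl_0_0 / cvl_0_1 in this run
  done
done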
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.542 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.542 Found net devices under 0000:31:00.1: cvl_0_1 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.542 15:14:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.542 15:14:26 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.734 ms 00:09:17.542 00:09:17.542 --- 10.0.0.2 ping statistics --- 00:09:17.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.542 rtt min/avg/max/mdev = 0.734/0.734/0.734/0.000 ms 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:09:17.542 00:09:17.542 --- 10.0.0.1 ping statistics --- 00:09:17.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.542 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=527126 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 527126 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 527126 ']' 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
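Before starting the target, nvmf_tcp_init (traced above) splits the two E810 ports across a network namespace so that initiator and target can run on the same host: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is launched inside the namespace. Condensed, with the interface names, addresses and binary path from this run:
# Sketch of the namespace wiring and target launch shown in the trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # ~0.7 ms in this run
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &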
00:09:17.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.542 15:14:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.542 [2024-07-15 15:14:26.346270] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:09:17.542 [2024-07-15 15:14:26.346334] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.542 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.542 [2024-07-15 15:14:26.422276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.542 [2024-07-15 15:14:26.496700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.542 [2024-07-15 15:14:26.496739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.542 [2024-07-15 15:14:26.496747] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.542 [2024-07-15 15:14:26.496753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.542 [2024-07-15 15:14:26.496759] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.542 [2024-07-15 15:14:26.496894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.542 [2024-07-15 15:14:26.496989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.542 [2024-07-15 15:14:26.497146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.542 [2024-07-15 15:14:26.497147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.542 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 [2024-07-15 15:14:27.163468] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 [2024-07-15 15:14:27.179631] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:17.803 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.064 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.325 15:14:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:18.586 15:14:28 
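The checks above compare the referral list reported over RPC with what an initiator actually sees in the discovery log on port 8009. From the commands visible in this trace, the two helpers behave roughly like the sketch below; the function bodies are reconstructed from the xtrace, so treat them as an approximation of referrals.sh rather than a copy (hostnqn/hostid arguments omitted for brevity, rpc_cmd being the suite's RPC wrapper):
# Approximate reconstruction of the helpers used above.
get_referral_ips() {
  if [[ $1 == rpc ]]; then
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  elif [[ $1 == nvme ]]; then
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  fi
}
get_discovery_entries() {
  local subtype=$1
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq ".records[] | select(.subtype == \"$subtype\")"
}
# After adding a referral with -n nqn.2016-06.io.spdk:cnode1 (traced above), the referral
# shows up in the discovery log as an "nvme subsystem" entry rather than a discovery referral:
[[ $(get_discovery_entries "nvme subsystem" | jq -r .subnqn) == nqn.2016-06.io.spdk:cnode1 ]]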
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:18.587 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:18.848 15:14:28 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:18.848 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:19.108 
15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.108 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.108 rmmod nvme_tcp 00:09:19.368 rmmod nvme_fabrics 00:09:19.368 rmmod nvme_keyring 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 527126 ']' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 527126 ']' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 527126' 00:09:19.368 killing process with pid 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 527126 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.368 15:14:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.931 15:14:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:21.931 00:09:21.931 real 0m12.457s 00:09:21.931 user 0m12.850s 00:09:21.932 sys 0m6.188s 00:09:21.932 15:14:31 nvmf_tcp.nvmf_referrals 
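nvmftestfini, traced above, is the same cleanup the discovery test ran earlier: unload the NVMe/TCP initiator modules, kill the nvmf_tgt process recorded when the target was started (pid 527126 here), and dismantle the per-test network namespace. In outline, with the pid and interface names from this run and the namespace removal stated as an assumption (the _remove_spdk_ns body is not shown in the trace):
# Condensed view of the nvmftestfini cleanup traced above.
modprobe -v -r nvme-tcp           # verbose output shows nvme_tcp, nvme_fabrics, nvme_keyring removed
modprobe -v -r nvme-fabrics
kill 527126 && wait 527126        # nvmfpid saved by nvmfappstart
ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to here
ip -4 addr flush cvl_0_1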
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:21.932 15:14:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 ************************************ 00:09:21.932 END TEST nvmf_referrals 00:09:21.932 ************************************ 00:09:21.932 15:14:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:21.932 15:14:31 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:21.932 15:14:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:21.932 15:14:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.932 15:14:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.932 ************************************ 00:09:21.932 START TEST nvmf_connect_disconnect 00:09:21.932 ************************************ 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:21.932 * Looking for test storage... 00:09:21.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.932 15:14:31 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:21.932 15:14:31 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.124 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:30.125 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:30.125 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.125 15:14:38 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:30.125 Found net devices under 0000:31:00.0: cvl_0_0 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:30.125 Found net devices under 0000:31:00.1: cvl_0_1 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:09:30.125 00:09:30.125 --- 10.0.0.2 ping statistics --- 00:09:30.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.125 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:09:30.125 00:09:30.125 --- 10.0.0.1 ping statistics --- 00:09:30.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.125 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=532118 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 532118 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 532118 ']' 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.125 15:14:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 [2024-07-15 15:14:38.627302] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:09:30.125 [2024-07-15 15:14:38.627354] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.125 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.125 [2024-07-15 15:14:38.699843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.125 [2024-07-15 15:14:38.766937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.125 [2024-07-15 15:14:38.766971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.125 [2024-07-15 15:14:38.766978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.125 [2024-07-15 15:14:38.766987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.125 [2024-07-15 15:14:38.766993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.125 [2024-07-15 15:14:38.767130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.125 [2024-07-15 15:14:38.767265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.125 [2024-07-15 15:14:38.767422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.125 [2024-07-15 15:14:38.767423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 [2024-07-15 15:14:39.442576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.125 15:14:39 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:30.125 [2024-07-15 15:14:39.501888] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:30.125 15:14:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:34.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:48.404 rmmod nvme_tcp 00:09:48.404 rmmod nvme_fabrics 00:09:48.404 rmmod nvme_keyring 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 532118 ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 532118 ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 532118' 00:09:48.404 killing process with pid 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 532118 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:48.404 15:14:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.315 15:14:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:50.315 00:09:50.315 real 0m28.707s 00:09:50.315 user 1m17.620s 00:09:50.315 sys 0m6.526s 00:09:50.315 15:14:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.315 15:14:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:50.315 ************************************ 00:09:50.315 END TEST nvmf_connect_disconnect 00:09:50.315 ************************************ 00:09:50.315 15:14:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:50.315 15:14:59 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:50.315 15:14:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:50.315 15:14:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.315 15:14:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.315 ************************************ 00:09:50.315 START TEST nvmf_multitarget 00:09:50.315 ************************************ 00:09:50.315 15:14:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:50.574 * Looking for test storage... 
00:09:50.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:50.574 15:14:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:50.574 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:50.574 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.574 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.574 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:50.575 15:15:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.804 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.804 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.804 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.804 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:58.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:09:58.804 00:09:58.804 --- 10.0.0.2 ping statistics --- 00:09:58.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.804 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:09:58.804 00:09:58.804 --- 10.0.0.1 ping statistics --- 00:09:58.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.804 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=540577 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 540577 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 540577 ']' 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.804 15:15:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:58.804 [2024-07-15 15:15:07.696167] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:09:58.804 [2024-07-15 15:15:07.696218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.804 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.804 [2024-07-15 15:15:07.765632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.804 [2024-07-15 15:15:07.830824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.804 [2024-07-15 15:15:07.830859] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.804 [2024-07-15 15:15:07.830866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.804 [2024-07-15 15:15:07.830873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.804 [2024-07-15 15:15:07.830878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.804 [2024-07-15 15:15:07.830920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.804 [2024-07-15 15:15:07.831132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.804 [2024-07-15 15:15:07.831275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.804 [2024-07-15 15:15:07.831276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:59.064 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:59.325 "nvmf_tgt_1" 00:09:59.325 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:59.325 "nvmf_tgt_2" 00:09:59.325 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:59.325 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:59.325 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:59.325 15:15:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:59.585 true 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:59.585 true 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.585 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.845 rmmod nvme_tcp 00:09:59.845 rmmod nvme_fabrics 00:09:59.845 rmmod nvme_keyring 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 540577 ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 540577 ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 540577' 00:09:59.845 killing process with pid 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 540577 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.845 15:15:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.389 15:15:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:02.389 00:10:02.389 real 0m11.627s 00:10:02.389 user 0m9.388s 00:10:02.389 sys 0m6.019s 00:10:02.389 15:15:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.389 15:15:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.389 ************************************ 00:10:02.389 END TEST nvmf_multitarget 00:10:02.389 ************************************ 00:10:02.389 15:15:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:02.389 15:15:11 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:02.389 15:15:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:02.389 15:15:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.389 15:15:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:02.389 ************************************ 00:10:02.389 START TEST nvmf_rpc 00:10:02.389 ************************************ 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:02.389 * Looking for test storage... 
00:10:02.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.389 15:15:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
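The next stretch of the trace (gather_supported_nvmf_pci_devs) walks the list of supported PCI IDs and maps each matching device to its kernel netdev. With the E810 NICs used on this rig (0x8086:0x159b), a rough manual equivalent of that lookup would be:

    lspci -D -d 8086:159b                        # the two E810 ports reported below, 0000:31:00.0 and 0000:31:00.1
    ls /sys/bus/pci/devices/0000:31:00.0/net     # netdev behind that port (cvl_0_0 here)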
00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:10.532 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:10.532 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:10.532 Found net devices under 0000:31:00.0: cvl_0_0 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:10.532 Found net devices under 0000:31:00.1: cvl_0_1 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.532 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.755 ms 00:10:10.533 00:10:10.533 --- 10.0.0.2 ping statistics --- 00:10:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.533 rtt min/avg/max/mdev = 0.755/0.755/0.755/0.000 ms 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:10.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:10:10.533 00:10:10.533 --- 10.0.0.1 ping statistics --- 00:10:10.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.533 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=545962 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 545962 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 545962 ']' 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.533 15:15:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.533 [2024-07-15 15:15:19.597693] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:10:10.533 [2024-07-15 15:15:19.597744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.533 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.533 [2024-07-15 15:15:19.669389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.533 [2024-07-15 15:15:19.735188] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.533 [2024-07-15 15:15:19.735223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:10.533 [2024-07-15 15:15:19.735230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.533 [2024-07-15 15:15:19.735236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.533 [2024-07-15 15:15:19.735242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.533 [2024-07-15 15:15:19.735354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.533 [2024-07-15 15:15:19.735470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.533 [2024-07-15 15:15:19.735627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.533 [2024-07-15 15:15:19.735628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.794 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:11.055 "tick_rate": 2400000000, 00:10:11.055 "poll_groups": [ 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_000", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_001", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_002", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_003", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [] 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }' 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 [2024-07-15 15:15:20.532831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.055 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:11.055 "tick_rate": 2400000000, 00:10:11.055 "poll_groups": [ 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_000", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [ 00:10:11.055 { 00:10:11.055 "trtype": "TCP" 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_001", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [ 00:10:11.055 { 00:10:11.055 "trtype": "TCP" 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_002", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [ 00:10:11.055 { 00:10:11.055 "trtype": "TCP" 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }, 00:10:11.055 { 00:10:11.055 "name": "nvmf_tgt_poll_group_003", 00:10:11.055 "admin_qpairs": 0, 00:10:11.055 "io_qpairs": 0, 00:10:11.055 "current_admin_qpairs": 0, 00:10:11.055 "current_io_qpairs": 0, 00:10:11.055 "pending_bdev_io": 0, 00:10:11.055 "completed_nvme_io": 0, 00:10:11.055 "transports": [ 00:10:11.055 { 00:10:11.055 "trtype": "TCP" 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 } 00:10:11.055 ] 00:10:11.055 }' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
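Condensed view of what rpc.sh is verifying in this stretch (a sketch, not the script itself: rpc_cmd is the harness's wrapper for issuing JSON-RPC calls to the running nvmf_tgt, and the jq filters below mirror its jcount/jsum helpers):

    rpc_cmd nvmf_get_stats | jq '.poll_groups | length'                # 4 poll groups, one per core in -m 0xF
    rpc_cmd nvmf_get_stats | jq '.poll_groups[0].transports[0]'        # null until a transport exists
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # logs '*** TCP Transport Init ***'
    rpc_cmd nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add'  # still 0, nothing is connected yet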
00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.056 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.316 Malloc1 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.316 [2024-07-15 15:15:20.720510] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.2 -s 4420 00:10:11.316 [2024-07-15 15:15:20.747370] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:10:11.316 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:11.316 could not add new controller: failed to write to nvme-fabrics device 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:11.316 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:11.317 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:11.317 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.317 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.317 15:15:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.317 15:15:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.699 15:15:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.699 15:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.699 15:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.699 15:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.699 15:15:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.238 15:15:24 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:15.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.238 [2024-07-15 15:15:24.523437] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6' 00:10:15.238 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:15.238 could not add new controller: failed to write to nvme-fabrics device 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.238 15:15:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:16.629 15:15:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:16.629 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:16.629 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:16.629 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:16.629 15:15:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:18.539 15:15:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:18.539 15:15:28 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.539 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.798 [2024-07-15 15:15:28.170339] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.798 15:15:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:20.179 15:15:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:20.179 15:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:20.179 15:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:20.179 15:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:20.179 15:15:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:22.159 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 [2024-07-15 15:15:31.826775] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.420 15:15:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.803 15:15:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.803 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:23.803 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.803 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.803 15:15:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 [2024-07-15 15:15:35.513887] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.346 15:15:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:27.727 15:15:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.727 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:27.727 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.727 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:27.727 15:15:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:29.636 15:15:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 [2024-07-15 15:15:39.172909] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.636 15:15:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.544 15:15:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.544 15:15:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.544 15:15:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.544 15:15:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.544 15:15:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.453 
15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:33.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 [2024-07-15 15:15:42.866771] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.453 15:15:42 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.453 15:15:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.837 15:15:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.837 15:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:34.837 15:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.837 15:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:34.837 15:15:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:36.751 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.010 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 [2024-07-15 15:15:46.528456] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 [2024-07-15 15:15:46.588599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.011 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 [2024-07-15 15:15:46.652778] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 [2024-07-15 15:15:46.708957] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:37.271 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 [2024-07-15 15:15:46.765139] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:37.272 "tick_rate": 2400000000, 00:10:37.272 "poll_groups": [ 00:10:37.272 { 00:10:37.272 "name": "nvmf_tgt_poll_group_000", 00:10:37.272 "admin_qpairs": 0, 00:10:37.272 "io_qpairs": 224, 00:10:37.272 "current_admin_qpairs": 0, 00:10:37.272 "current_io_qpairs": 0, 00:10:37.272 "pending_bdev_io": 0, 00:10:37.272 "completed_nvme_io": 279, 00:10:37.272 "transports": [ 00:10:37.272 { 00:10:37.272 "trtype": "TCP" 00:10:37.272 } 00:10:37.272 ] 00:10:37.272 }, 00:10:37.272 { 00:10:37.272 "name": "nvmf_tgt_poll_group_001", 00:10:37.272 "admin_qpairs": 1, 00:10:37.272 "io_qpairs": 223, 00:10:37.272 "current_admin_qpairs": 0, 00:10:37.272 "current_io_qpairs": 0, 00:10:37.272 "pending_bdev_io": 0, 00:10:37.272 "completed_nvme_io": 313, 00:10:37.272 "transports": [ 00:10:37.272 { 00:10:37.272 "trtype": "TCP" 00:10:37.272 } 00:10:37.272 ] 00:10:37.272 }, 00:10:37.272 { 
00:10:37.272 "name": "nvmf_tgt_poll_group_002", 00:10:37.272 "admin_qpairs": 6, 00:10:37.272 "io_qpairs": 218, 00:10:37.272 "current_admin_qpairs": 0, 00:10:37.272 "current_io_qpairs": 0, 00:10:37.272 "pending_bdev_io": 0, 00:10:37.272 "completed_nvme_io": 422, 00:10:37.272 "transports": [ 00:10:37.272 { 00:10:37.272 "trtype": "TCP" 00:10:37.272 } 00:10:37.272 ] 00:10:37.272 }, 00:10:37.272 { 00:10:37.272 "name": "nvmf_tgt_poll_group_003", 00:10:37.272 "admin_qpairs": 0, 00:10:37.272 "io_qpairs": 224, 00:10:37.272 "current_admin_qpairs": 0, 00:10:37.272 "current_io_qpairs": 0, 00:10:37.272 "pending_bdev_io": 0, 00:10:37.272 "completed_nvme_io": 225, 00:10:37.272 "transports": [ 00:10:37.272 { 00:10:37.272 "trtype": "TCP" 00:10:37.272 } 00:10:37.272 ] 00:10:37.272 } 00:10:37.272 ] 00:10:37.272 }' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:37.272 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.532 rmmod nvme_tcp 00:10:37.532 rmmod nvme_fabrics 00:10:37.532 rmmod nvme_keyring 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:37.532 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 545962 ']' 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 545962 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 545962 ']' 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 545962 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.533 15:15:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 545962 00:10:37.533 15:15:47 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.533 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.533 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 545962' 00:10:37.533 killing process with pid 545962 00:10:37.533 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 545962 00:10:37.533 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 545962 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.792 15:15:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.700 15:15:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.700 00:10:39.700 real 0m37.647s 00:10:39.700 user 1m51.800s 00:10:39.700 sys 0m7.405s 00:10:39.700 15:15:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.700 15:15:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.700 ************************************ 00:10:39.700 END TEST nvmf_rpc 00:10:39.700 ************************************ 00:10:39.700 15:15:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.700 15:15:49 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:39.700 15:15:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.700 15:15:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.700 15:15:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.960 ************************************ 00:10:39.960 START TEST nvmf_invalid 00:10:39.960 ************************************ 00:10:39.960 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:39.960 * Looking for test storage... 
00:10:39.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.960 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.961 15:15:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.148 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.148 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:48.148 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:48.148 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:48.149 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:48.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:48.149 Found net devices under 0000:31:00.0: cvl_0_0 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:48.149 Found net devices under 0000:31:00.1: cvl_0_1 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.149 15:15:56 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:48.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:48.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:10:48.149 00:10:48.149 --- 10.0.0.2 ping statistics --- 00:10:48.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.149 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:48.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:10:48.149 00:10:48.149 --- 10.0.0.1 ping statistics --- 00:10:48.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.149 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=556042 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 556042 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 556042 ']' 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.149 15:15:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.149 [2024-07-15 15:15:57.234853] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:10:48.149 [2024-07-15 15:15:57.234937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:10:48.149 [2024-07-15 15:15:57.311008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.149 [2024-07-15 15:15:57.385481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.149 [2024-07-15 15:15:57.385521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.149 [2024-07-15 15:15:57.385528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.149 [2024-07-15 15:15:57.385535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.149 [2024-07-15 15:15:57.385540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.149 [2024-07-15 15:15:57.385647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.149 [2024-07-15 15:15:57.385765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.149 [2024-07-15 15:15:57.385939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.149 [2024-07-15 15:15:57.385939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.409 15:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.409 15:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:48.409 15:15:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.409 15:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:48.409 15:15:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24118 00:10:48.669 [2024-07-15 15:15:58.203916] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:48.669 { 00:10:48.669 "nqn": "nqn.2016-06.io.spdk:cnode24118", 00:10:48.669 "tgt_name": "foobar", 00:10:48.669 "method": "nvmf_create_subsystem", 00:10:48.669 "req_id": 1 00:10:48.669 } 00:10:48.669 Got JSON-RPC error response 00:10:48.669 response: 00:10:48.669 { 00:10:48.669 "code": -32603, 00:10:48.669 "message": "Unable to find target foobar" 00:10:48.669 }' 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:48.669 { 00:10:48.669 "nqn": "nqn.2016-06.io.spdk:cnode24118", 00:10:48.669 "tgt_name": "foobar", 00:10:48.669 "method": "nvmf_create_subsystem", 00:10:48.669 "req_id": 1 00:10:48.669 } 00:10:48.669 Got JSON-RPC error response 00:10:48.669 response: 00:10:48.669 { 00:10:48.669 "code": -32603, 00:10:48.669 "message": "Unable to find target foobar" 
00:10:48.669 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:48.669 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22262 00:10:48.929 [2024-07-15 15:15:58.380512] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22262: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:48.929 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:48.929 { 00:10:48.930 "nqn": "nqn.2016-06.io.spdk:cnode22262", 00:10:48.930 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:48.930 "method": "nvmf_create_subsystem", 00:10:48.930 "req_id": 1 00:10:48.930 } 00:10:48.930 Got JSON-RPC error response 00:10:48.930 response: 00:10:48.930 { 00:10:48.930 "code": -32602, 00:10:48.930 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:48.930 }' 00:10:48.930 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:48.930 { 00:10:48.930 "nqn": "nqn.2016-06.io.spdk:cnode22262", 00:10:48.930 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:48.930 "method": "nvmf_create_subsystem", 00:10:48.930 "req_id": 1 00:10:48.930 } 00:10:48.930 Got JSON-RPC error response 00:10:48.930 response: 00:10:48.930 { 00:10:48.930 "code": -32602, 00:10:48.930 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:48.930 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:48.930 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:48.930 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14036 00:10:49.190 [2024-07-15 15:15:58.557080] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14036: invalid model number 'SPDK_Controller' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:49.190 { 00:10:49.190 "nqn": "nqn.2016-06.io.spdk:cnode14036", 00:10:49.190 "model_number": "SPDK_Controller\u001f", 00:10:49.190 "method": "nvmf_create_subsystem", 00:10:49.190 "req_id": 1 00:10:49.190 } 00:10:49.190 Got JSON-RPC error response 00:10:49.190 response: 00:10:49.190 { 00:10:49.190 "code": -32602, 00:10:49.190 "message": "Invalid MN SPDK_Controller\u001f" 00:10:49.190 }' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:49.190 { 00:10:49.190 "nqn": "nqn.2016-06.io.spdk:cnode14036", 00:10:49.190 "model_number": "SPDK_Controller\u001f", 00:10:49.190 "method": "nvmf_create_subsystem", 00:10:49.190 "req_id": 1 00:10:49.190 } 00:10:49.190 Got JSON-RPC error response 00:10:49.190 response: 00:10:49.190 { 00:10:49.190 "code": -32602, 00:10:49.190 "message": "Invalid MN SPDK_Controller\u001f" 00:10:49.190 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
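The per-character trace running above and below this point is the test's gen_random_s helper assembling a string of the requested length from the chars array (ASCII codes 32 through 127) printed just before. Condensed, and leaving the exact quoting of invalid.sh aside, the loop is roughly:

    # Build a random string of $1 characters drawn from ASCII 32-127.
    gen_random_s() {
        local length=$1 ll string=''
        for ((ll = 0; ll < length; ll++)); do
            # pick a code point, render it as hex, and append the corresponding character
            string+=$(echo -e "\x$(printf %x $((RANDOM % 96 + 32)))")
        done
        echo "$string"
    }

The real helper also checks whether the first character is '-' before echoing the result, which is the [[ h == \- ]] test visible further along in the trace.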
00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.190 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h299~PIQz'\''1_'\''|#xenL' 00:10:49.191 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'h299~PIQz'\''1_'\''|#xenL' nqn.2016-06.io.spdk:cnode4988 00:10:49.451 [2024-07-15 15:15:58.898150] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4988: invalid serial number 'h299~PIQz'1_'|#xenL' 00:10:49.451 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:49.451 { 00:10:49.451 "nqn": "nqn.2016-06.io.spdk:cnode4988", 00:10:49.451 "serial_number": "h\u007f299~PIQz'\''1_\u007f'\''|#xenL", 00:10:49.451 "method": "nvmf_create_subsystem", 00:10:49.451 "req_id": 1 00:10:49.451 } 00:10:49.451 Got JSON-RPC error response 
00:10:49.451 response: 00:10:49.451 { 00:10:49.451 "code": -32602, 00:10:49.451 "message": "Invalid SN h\u007f299~PIQz'\''1_\u007f'\''|#xenL" 00:10:49.451 }' 00:10:49.451 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:49.451 { 00:10:49.451 "nqn": "nqn.2016-06.io.spdk:cnode4988", 00:10:49.451 "serial_number": "h\u007f299~PIQz'1_\u007f'|#xenL", 00:10:49.452 "method": "nvmf_create_subsystem", 00:10:49.452 "req_id": 1 00:10:49.452 } 00:10:49.452 Got JSON-RPC error response 00:10:49.452 response: 00:10:49.452 { 00:10:49.452 "code": -32602, 00:10:49.452 "message": "Invalid SN h\u007f299~PIQz'1_\u007f'|#xenL" 00:10:49.452 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 
15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:49.452 15:15:59 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:49.452 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:49.713 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
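The 41-character string being assembled here becomes the model number handed to nvmf_create_subsystem just below. Every negative case in invalid.sh reduces to the same capture-and-match shape; a sketch, with $rpc standing for the rpc.py path used throughout this log and $bad_model_number a hypothetical variable holding the generated string:

    # Issue the RPC, capture its error output, and pass only if the expected rejection appears.
    out=$($rpc nvmf_create_subsystem -d "$bad_model_number" nqn.2016-06.io.spdk:cnode30945 2>&1) || true
    [[ $out == *"Invalid MN"* ]]

The same pattern drives the serial-number ("Invalid SN"), target-name ("Unable to find target"), and cntlid-range checks seen before and after this point.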
00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\'']6=2JkU5&ZS<%y$PkW?%"debs9;"W+\l.X~>(M5{' 00:10:49.714 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\'']6=2JkU5&ZS<%y$PkW?%"debs9;"W+\l.X~>(M5{' nqn.2016-06.io.spdk:cnode30945 00:10:49.973 [2024-07-15 15:15:59.375711] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30945: invalid model number '']6=2JkU5&ZS<%y$PkW?%"debs9;"W+\l.X~>(M5{' 00:10:49.973 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:49.973 { 00:10:49.973 "nqn": "nqn.2016-06.io.spdk:cnode30945", 00:10:49.973 "model_number": "'\'']6=2JkU5&ZS<%y$PkW?%\"debs9;\"W+\\l.X~>(M5{", 00:10:49.973 "method": "nvmf_create_subsystem", 00:10:49.973 "req_id": 1 00:10:49.973 } 00:10:49.973 Got JSON-RPC error response 00:10:49.973 response: 00:10:49.973 { 00:10:49.973 "code": -32602, 00:10:49.973 "message": "Invalid MN '\'']6=2JkU5&ZS<%y$PkW?%\"debs9;\"W+\\l.X~>(M5{" 00:10:49.973 }' 00:10:49.973 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:49.973 { 00:10:49.973 "nqn": "nqn.2016-06.io.spdk:cnode30945", 00:10:49.973 "model_number": "']6=2JkU5&ZS<%y$PkW?%\"debs9;\"W+\\l.X~>(M5{", 00:10:49.973 "method": "nvmf_create_subsystem", 00:10:49.973 "req_id": 1 00:10:49.973 } 00:10:49.973 Got JSON-RPC error response 00:10:49.973 response: 00:10:49.973 { 00:10:49.973 "code": -32602, 00:10:49.973 "message": "Invalid MN ']6=2JkU5&ZS<%y$PkW?%\"debs9;\"W+\\l.X~>(M5{" 00:10:49.973 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:49.973 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:49.973 [2024-07-15 15:15:59.544372] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.973 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:50.234 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:50.234 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:50.234 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:50.234 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:50.234 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:50.494 [2024-07-15 15:15:59.897496] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:50.494 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:50.494 { 00:10:50.494 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:50.494 "listen_address": { 00:10:50.494 "trtype": "tcp", 00:10:50.494 "traddr": "", 00:10:50.494 "trsvcid": "4421" 00:10:50.494 }, 00:10:50.494 "method": "nvmf_subsystem_remove_listener", 00:10:50.494 "req_id": 1 00:10:50.494 } 00:10:50.494 Got JSON-RPC error response 00:10:50.494 response: 00:10:50.494 { 00:10:50.494 "code": -32602, 00:10:50.494 "message": "Invalid parameters" 00:10:50.494 }' 00:10:50.494 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:50.494 { 00:10:50.494 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:50.494 "listen_address": { 00:10:50.494 "trtype": "tcp", 00:10:50.494 "traddr": "", 00:10:50.494 "trsvcid": "4421" 00:10:50.494 }, 00:10:50.494 "method": "nvmf_subsystem_remove_listener", 00:10:50.494 "req_id": 1 00:10:50.494 } 
00:10:50.494 Got JSON-RPC error response 00:10:50.494 response: 00:10:50.494 { 00:10:50.494 "code": -32602, 00:10:50.494 "message": "Invalid parameters" 00:10:50.494 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:50.494 15:15:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2025 -i 0 00:10:50.494 [2024-07-15 15:16:00.074055] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2025: invalid cntlid range [0-65519] 00:10:50.494 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:50.494 { 00:10:50.494 "nqn": "nqn.2016-06.io.spdk:cnode2025", 00:10:50.494 "min_cntlid": 0, 00:10:50.494 "method": "nvmf_create_subsystem", 00:10:50.494 "req_id": 1 00:10:50.494 } 00:10:50.494 Got JSON-RPC error response 00:10:50.494 response: 00:10:50.494 { 00:10:50.494 "code": -32602, 00:10:50.494 "message": "Invalid cntlid range [0-65519]" 00:10:50.494 }' 00:10:50.494 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:50.494 { 00:10:50.494 "nqn": "nqn.2016-06.io.spdk:cnode2025", 00:10:50.494 "min_cntlid": 0, 00:10:50.494 "method": "nvmf_create_subsystem", 00:10:50.494 "req_id": 1 00:10:50.494 } 00:10:50.494 Got JSON-RPC error response 00:10:50.494 response: 00:10:50.494 { 00:10:50.494 "code": -32602, 00:10:50.494 "message": "Invalid cntlid range [0-65519]" 00:10:50.494 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:50.494 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12309 -i 65520 00:10:50.754 [2024-07-15 15:16:00.246607] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12309: invalid cntlid range [65520-65519] 00:10:50.754 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:50.754 { 00:10:50.754 "nqn": "nqn.2016-06.io.spdk:cnode12309", 00:10:50.754 "min_cntlid": 65520, 00:10:50.754 "method": "nvmf_create_subsystem", 00:10:50.754 "req_id": 1 00:10:50.754 } 00:10:50.754 Got JSON-RPC error response 00:10:50.754 response: 00:10:50.754 { 00:10:50.754 "code": -32602, 00:10:50.754 "message": "Invalid cntlid range [65520-65519]" 00:10:50.754 }' 00:10:50.754 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:50.754 { 00:10:50.754 "nqn": "nqn.2016-06.io.spdk:cnode12309", 00:10:50.754 "min_cntlid": 65520, 00:10:50.754 "method": "nvmf_create_subsystem", 00:10:50.754 "req_id": 1 00:10:50.754 } 00:10:50.754 Got JSON-RPC error response 00:10:50.754 response: 00:10:50.754 { 00:10:50.754 "code": -32602, 00:10:50.754 "message": "Invalid cntlid range [65520-65519]" 00:10:50.754 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:50.754 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5046 -I 0 00:10:51.015 [2024-07-15 15:16:00.423204] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5046: invalid cntlid range [1-0] 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:51.015 { 00:10:51.015 "nqn": "nqn.2016-06.io.spdk:cnode5046", 00:10:51.015 "max_cntlid": 0, 00:10:51.015 "method": "nvmf_create_subsystem", 00:10:51.015 "req_id": 1 00:10:51.015 } 00:10:51.015 Got JSON-RPC error response 
00:10:51.015 response: 00:10:51.015 { 00:10:51.015 "code": -32602, 00:10:51.015 "message": "Invalid cntlid range [1-0]" 00:10:51.015 }' 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:51.015 { 00:10:51.015 "nqn": "nqn.2016-06.io.spdk:cnode5046", 00:10:51.015 "max_cntlid": 0, 00:10:51.015 "method": "nvmf_create_subsystem", 00:10:51.015 "req_id": 1 00:10:51.015 } 00:10:51.015 Got JSON-RPC error response 00:10:51.015 response: 00:10:51.015 { 00:10:51.015 "code": -32602, 00:10:51.015 "message": "Invalid cntlid range [1-0]" 00:10:51.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21037 -I 65520 00:10:51.015 [2024-07-15 15:16:00.595713] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21037: invalid cntlid range [1-65520] 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:51.015 { 00:10:51.015 "nqn": "nqn.2016-06.io.spdk:cnode21037", 00:10:51.015 "max_cntlid": 65520, 00:10:51.015 "method": "nvmf_create_subsystem", 00:10:51.015 "req_id": 1 00:10:51.015 } 00:10:51.015 Got JSON-RPC error response 00:10:51.015 response: 00:10:51.015 { 00:10:51.015 "code": -32602, 00:10:51.015 "message": "Invalid cntlid range [1-65520]" 00:10:51.015 }' 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:51.015 { 00:10:51.015 "nqn": "nqn.2016-06.io.spdk:cnode21037", 00:10:51.015 "max_cntlid": 65520, 00:10:51.015 "method": "nvmf_create_subsystem", 00:10:51.015 "req_id": 1 00:10:51.015 } 00:10:51.015 Got JSON-RPC error response 00:10:51.015 response: 00:10:51.015 { 00:10:51.015 "code": -32602, 00:10:51.015 "message": "Invalid cntlid range [1-65520]" 00:10:51.015 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:51.015 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1651 -i 6 -I 5 00:10:51.275 [2024-07-15 15:16:00.768273] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1651: invalid cntlid range [6-5] 00:10:51.275 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:51.275 { 00:10:51.275 "nqn": "nqn.2016-06.io.spdk:cnode1651", 00:10:51.275 "min_cntlid": 6, 00:10:51.275 "max_cntlid": 5, 00:10:51.275 "method": "nvmf_create_subsystem", 00:10:51.275 "req_id": 1 00:10:51.275 } 00:10:51.275 Got JSON-RPC error response 00:10:51.275 response: 00:10:51.275 { 00:10:51.276 "code": -32602, 00:10:51.276 "message": "Invalid cntlid range [6-5]" 00:10:51.276 }' 00:10:51.276 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:51.276 { 00:10:51.276 "nqn": "nqn.2016-06.io.spdk:cnode1651", 00:10:51.276 "min_cntlid": 6, 00:10:51.276 "max_cntlid": 5, 00:10:51.276 "method": "nvmf_create_subsystem", 00:10:51.276 "req_id": 1 00:10:51.276 } 00:10:51.276 Got JSON-RPC error response 00:10:51.276 response: 00:10:51.276 { 00:10:51.276 "code": -32602, 00:10:51.276 "message": "Invalid cntlid range [6-5]" 00:10:51.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:51.276 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:51.535 15:16:00 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:51.535 { 00:10:51.535 "name": "foobar", 00:10:51.535 "method": "nvmf_delete_target", 00:10:51.535 "req_id": 1 00:10:51.535 } 00:10:51.535 Got JSON-RPC error response 00:10:51.535 response: 00:10:51.535 { 00:10:51.535 "code": -32602, 00:10:51.535 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:51.535 }' 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:51.535 { 00:10:51.535 "name": "foobar", 00:10:51.535 "method": "nvmf_delete_target", 00:10:51.535 "req_id": 1 00:10:51.535 } 00:10:51.535 Got JSON-RPC error response 00:10:51.535 response: 00:10:51.535 { 00:10:51.535 "code": -32602, 00:10:51.535 "message": "The specified target doesn't exist, cannot delete it." 00:10:51.535 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.535 rmmod nvme_tcp 00:10:51.535 rmmod nvme_fabrics 00:10:51.535 rmmod nvme_keyring 00:10:51.535 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 556042 ']' 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 556042 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 556042 ']' 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 556042 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:51.536 15:16:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 556042 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 556042' 00:10:51.536 killing process with pid 556042 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 556042 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 556042 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.536 15:16:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.078 15:16:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:54.078 00:10:54.078 real 0m13.888s 00:10:54.078 user 0m19.429s 00:10:54.078 sys 0m6.582s 00:10:54.078 15:16:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.078 15:16:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:54.078 ************************************ 00:10:54.078 END TEST nvmf_invalid 00:10:54.078 ************************************ 00:10:54.078 15:16:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:54.078 15:16:03 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:54.078 15:16:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.078 15:16:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.078 15:16:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:54.078 ************************************ 00:10:54.078 START TEST nvmf_abort 00:10:54.078 ************************************ 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:54.078 * Looking for test storage... 
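The nvmftestfini teardown traced just above condenses to a handful of commands; module, pid, interface, and namespace names are the ones from this run, and the namespace removal is an assumption about what _remove_spdk_ns does:

    # Unload the initiator-side modules pulled in earlier by 'modprobe nvme-tcp'.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt started for this test (pid 556042 in this run).
    kill "$nvmfpid" && wait "$nvmfpid"
    # Drop the test namespace and clear the address left on the second test port.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1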
00:10:54.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
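Among the values nvmf/common.sh exports above is the host identity pair consumed by the NVME_CONNECT helper defined alongside it. A small sketch of how the pair fits together; deriving NVME_HOSTID from the NQN suffix is an assumption, though the two values in this log do line up that way:

    # Generate a host NQN and reuse its uuid suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:008c5ac1-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # e.g. 008c5ac1-5feb-ec11-9bc7-a4bf019282a6
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")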
00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.078 15:16:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.211 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:02.212 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.212 15:16:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:02.212 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:02.212 Found net devices under 0000:31:00.0: cvl_0_0 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:02.212 Found net devices under 0000:31:00.1: cvl_0_1 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.212 15:16:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:11:02.212 00:11:02.212 --- 10.0.0.2 ping statistics --- 00:11:02.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.212 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:11:02.212 00:11:02.212 --- 10.0.0.1 ping statistics --- 00:11:02.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.212 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=561442 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 561442 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 561442 ']' 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.212 15:16:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.212 [2024-07-15 15:16:11.266155] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:11:02.212 [2024-07-15 15:16:11.266213] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.212 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.212 [2024-07-15 15:16:11.340541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.212 [2024-07-15 15:16:11.414562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.212 [2024-07-15 15:16:11.414600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
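The fixture assembled in the trace above is easy to miss among the xtrace noise: the two ice ports found on 0000:31:00.0/0000:31:00.1 become a point-to-point TCP link, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1); TCP port 4420 is opened in the firewall and a ping in each direction confirms the link before the target starts. A condensed standalone reproduction of that topology (run as root; interface and namespace names are taken from this log, not from any test script) would be:

  TARGET_IF=cvl_0_0          # port that moves into the target namespace
  INITIATOR_IF=cvl_0_1       # port that stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # initiator side -> target side
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target side -> initiator side
  modprobe nvme-tcp                        # kernel NVMe/TCP initiator module

With the link verified, everything that follows in this test (RPC provisioning and the abort example) runs against 10.0.0.2:4420.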
00:11:02.212 [2024-07-15 15:16:11.414608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.212 [2024-07-15 15:16:11.414614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.212 [2024-07-15 15:16:11.414620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.212 [2024-07-15 15:16:11.414723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.212 [2024-07-15 15:16:11.414881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.212 [2024-07-15 15:16:11.414882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.473 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.733 [2024-07-15 15:16:12.098120] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.733 Malloc0 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.733 Delay0 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.733 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.734 15:16:12 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.734 [2024-07-15 15:16:12.178492] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.734 15:16:12 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:02.734 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.734 [2024-07-15 15:16:12.299276] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:05.274 Initializing NVMe Controllers 00:11:05.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:05.274 controller IO queue size 128 less than required 00:11:05.274 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:05.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:05.274 Initialization complete. Launching workers. 
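For readers following along, rpc_cmd in this trace is the harness wrapper around scripts/rpc.py, so the target-side provisioning for the abort run can be written out as plain RPC calls. A sketch of the equivalent sequence (assuming the default RPC socket at /var/tmp/spdk.sock and paths relative to the SPDK tree, not a verbatim excerpt of target/abort.sh), ending with the abort example invocation shown above:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MB RAM-backed bdev, 4096-byte blocks
  # ~1 s average and p99 latency on reads and writes, so the abort example
  # always has queued I/O left to cancel
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128                 # one core, queue depth 128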
00:11:05.274 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34656 00:11:05.274 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34717, failed to submit 62 00:11:05.274 success 34660, unsuccess 57, failed 0 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.274 rmmod nvme_tcp 00:11:05.274 rmmod nvme_fabrics 00:11:05.274 rmmod nvme_keyring 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 561442 ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 561442 ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 561442' 00:11:05.274 killing process with pid 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 561442 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.274 15:16:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.245 15:16:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:07.245 00:11:07.245 real 0m13.375s 00:11:07.245 user 0m13.553s 00:11:07.245 sys 0m6.524s 00:11:07.245 15:16:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.245 15:16:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.245 ************************************ 00:11:07.245 END TEST nvmf_abort 00:11:07.245 ************************************ 00:11:07.245 15:16:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:07.245 15:16:16 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:07.245 15:16:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.245 15:16:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.245 15:16:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:07.245 ************************************ 00:11:07.245 START TEST nvmf_ns_hotplug_stress 00:11:07.245 ************************************ 00:11:07.245 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:07.505 * Looking for test storage... 00:11:07.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.505 15:16:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.505 15:16:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.505 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.506 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.506 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.506 15:16:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:15.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:15.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:15.641 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.642 15:16:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:15.642 Found net devices under 0000:31:00.0: cvl_0_0 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:15.642 Found net devices under 0000:31:00.1: cvl_0_1 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.642 15:16:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:15.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:11:15.642 00:11:15.642 --- 10.0.0.2 ping statistics --- 00:11:15.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.642 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:15.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:11:15.642 00:11:15.642 --- 10.0.0.1 ping statistics --- 00:11:15.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.642 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=566636 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 566636 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 566636 ']' 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.642 15:16:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.642 [2024-07-15 15:16:24.726809] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
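As with the abort test, the target for the hotplug-stress run is launched inside the namespace and the harness's waitforlisten helper blocks until the RPC socket answers. A minimal stand-in for that startup step (a sketch only; the real helper in the test framework does more bookkeeping, and spdk_get_version is just one convenient RPC to probe with):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  NVMFPID=$!    # -m 0xE runs reactors on cores 1-3; -e 0xFFFF enables all tracepoint groups
  # poll the default RPC socket until the target is ready
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$NVMFPID" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done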
00:11:15.642 [2024-07-15 15:16:24.726870] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.642 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.642 [2024-07-15 15:16:24.802794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.642 [2024-07-15 15:16:24.875926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.642 [2024-07-15 15:16:24.875965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.642 [2024-07-15 15:16:24.875973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.642 [2024-07-15 15:16:24.875979] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.642 [2024-07-15 15:16:24.875984] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.642 [2024-07-15 15:16:24.876096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.642 [2024-07-15 15:16:24.876251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.642 [2024-07-15 15:16:24.876252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.902 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:15.902 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:15.902 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:15.902 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:15.902 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.162 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.162 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:16.162 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:16.162 [2024-07-15 15:16:25.696265] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.162 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:16.420 15:16:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.420 [2024-07-15 15:16:26.033730] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.679 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:16.679 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:11:16.938 Malloc0 00:11:16.938 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:17.198 Delay0 00:11:17.198 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.198 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:17.457 NULL1 00:11:17.457 15:16:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:17.717 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:17.717 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=567054 00:11:17.717 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:17.717 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.717 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.717 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.977 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:17.977 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:17.977 [2024-07-15 15:16:27.549926] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:11:17.977 true 00:11:17.977 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:17.977 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.237 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.496 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:18.496 15:16:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:18.496 true 00:11:18.496 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:18.496 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.756 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.024 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:19.024 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:19.024 true 00:11:19.024 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:19.024 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.284 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.546 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:19.546 15:16:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:19.546 true 00:11:19.546 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:19.546 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.807 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.807 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:19.807 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:20.067 true 00:11:20.067 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:20.068 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.328 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.328 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:20.328 15:16:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:20.589 true 00:11:20.589 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:20.589 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.849 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.849 
15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:20.849 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:21.110 true 00:11:21.110 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:21.110 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.370 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.370 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:21.370 15:16:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:21.631 true 00:11:21.631 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:21.631 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.631 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.892 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:21.892 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:22.153 true 00:11:22.153 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:22.153 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.153 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.413 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:22.413 15:16:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:22.673 true 00:11:22.673 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:22.673 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.673 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.933 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:22.933 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:23.193 true 00:11:23.193 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:23.193 15:16:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.132 Read completed with error (sct=0, sc=11) 00:11:24.132 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.132 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:24.132 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:24.132 true 00:11:24.132 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:24.132 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.392 15:16:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.651 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:24.651 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:24.651 true 00:11:24.651 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:24.651 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.912 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.173 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:25.173 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:25.173 true 00:11:25.173 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:25.173 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.433 15:16:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.693 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:25.693 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:25.693 true 00:11:25.693 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:25.693 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.977 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.977 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:25.977 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:26.238 true 00:11:26.238 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:26.239 15:16:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.258 15:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.258 15:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:27.258 15:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:27.518 true 00:11:27.518 15:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:27.518 15:16:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.518 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.779 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:27.779 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:27.779 true 00:11:28.040 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:28.040 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.040 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.315 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:28.315 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:28.315 true 00:11:28.315 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:28.315 15:16:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.576 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.837 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:28.837 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:28.837 true 00:11:28.837 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:28.837 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.097 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.356 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:29.357 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:29.357 true 00:11:29.357 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:29.357 15:16:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.617 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.617 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:29.617 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:29.878 true 00:11:29.878 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:29.878 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.138 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.138 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:30.138 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:30.397 true 00:11:30.397 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:30.397 15:16:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.339 15:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.339 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:31.599 15:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:31.599 15:16:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:31.599 true 00:11:31.599 15:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:31.599 15:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.541 15:16:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.541 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:32.541 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:32.802 true 00:11:32.802 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:32.802 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.062 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.062 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:33.062 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:33.322 true 00:11:33.322 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:33.323 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.323 15:16:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.583 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:33.583 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:33.843 true 00:11:33.843 15:16:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:33.843 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.843 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.103 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:34.103 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:34.364 true 00:11:34.364 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:34.364 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.364 15:16:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.625 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:34.625 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:34.885 true 00:11:34.885 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:34.885 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.885 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.146 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:35.146 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:35.146 true 00:11:35.407 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:35.407 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.407 15:16:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.667 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:35.667 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:35.667 true 00:11:35.928 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:35.928 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.928 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.188 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:36.189 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:36.189 true 00:11:36.189 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:36.189 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.449 15:16:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.710 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:36.710 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:36.710 true 00:11:36.710 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:36.710 15:16:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.653 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.914 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:37.914 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:37.914 true 00:11:38.175 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:38.175 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.175 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.436 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:38.436 15:16:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:38.436 true 00:11:38.436 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:38.436 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.696 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.956 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:38.956 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:38.956 true 00:11:38.956 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:38.956 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.216 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.477 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:39.477 15:16:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:39.477 true 00:11:39.477 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:39.477 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.738 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.999 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:39.999 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:39.999 true 00:11:39.999 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:39.999 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.260 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.520 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:40.520 15:16:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:40.520 true 00:11:40.520 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:40.520 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.781 
15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.781 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:40.781 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:41.042 true 00:11:41.042 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:41.042 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.330 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.330 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:41.330 15:16:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:41.591 true 00:11:41.591 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:41.591 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.851 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.851 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:41.851 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:42.111 true 00:11:42.111 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:42.111 15:16:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.050 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.050 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.050 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:43.050 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:43.310 true 00:11:43.310 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:43.310 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.570 15:16:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.570 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:43.570 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:43.829 true 00:11:43.829 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:43.829 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.088 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.088 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:44.088 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:44.347 true 00:11:44.347 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:44.347 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.605 15:16:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.605 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:44.605 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:44.864 true 00:11:44.865 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:44.865 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.865 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.124 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:45.124 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:45.382 true 00:11:45.382 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:45.382 15:16:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.369 15:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:46.369 15:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:46.369 15:16:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:46.628 true 00:11:46.628 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:46.628 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.628 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.886 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:46.886 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:47.146 true 00:11:47.146 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:47.146 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.146 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.405 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:47.405 15:16:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:47.405 true 00:11:47.665 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:47.665 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.665 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.925 Initializing NVMe Controllers 00:11:47.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:47.925 Controller IO queue size 128, less than required. 00:11:47.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.925 Controller IO queue size 128, less than required. 00:11:47.925 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:47.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:47.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:47.925 Initialization complete. Launching workers. 
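The stretch of trace above is one pattern repeated over and over from ns_hotplug_stress.sh (the @44-@50 markers): while the process watched with kill -0 (PID 567054, evidently the background I/O job) is still alive, namespace 1 is detached, the Delay0 bdev is re-attached, and the NULL1 bdev is grown by one unit and hot-resized. A minimal bash sketch of that loop, reconstructed from the trace rather than taken from the script itself (the starting size and variable names are assumptions):

    # Reconstruction of the @44-@50 loop; perf_pid and the rpc.py path are taken from the trace.
    perf_pid=567054
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000                                                      # starting value assumed
    while kill -0 "$perf_pid" 2>/dev/null; do                           # run until the I/O generator exits
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                    # 1003, 1004, ... in the trace above
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # hot-resize the NULL1 bdev
    done

Each pass therefore exercises namespace hot-remove, hot-add and hot-resize against the same subsystem while host I/O keeps running.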
00:11:47.925 ======================================================== 00:11:47.925 Latency(us) 00:11:47.925 Device Information : IOPS MiB/s Average min max 00:11:47.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 384.05 0.19 75753.63 2564.00 1104789.80 00:11:47.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7138.86 3.49 17870.59 1504.46 406172.12 00:11:47.925 ======================================================== 00:11:47.925 Total : 7522.91 3.67 20825.54 1504.46 1104789.80 00:11:47.925 00:11:47.925 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:47.925 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:47.925 true 00:11:47.925 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 567054 00:11:47.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (567054) - No such process 00:11:47.925 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 567054 00:11:47.925 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.184 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:48.444 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:48.444 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:48.444 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:48.444 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.444 15:16:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:48.444 null0 00:11:48.444 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.444 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.444 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:48.704 null1 00:11:48.704 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.704 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.704 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:48.963 null2 00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:48.963 null3 
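The Total row of the latency summary above is simply the two per-namespace rows combined: the IOPS add directly, and the average latency works out to the IOPS-weighted mean of the two averages. Checking against the printed numbers:

    384.05 + 7138.86                                    = 7522.91 IOPS  (Total row)
    (384.05 * 75753.63 + 7138.86 * 17870.59) / 7522.91  ≈ 20826 us      (Total row shows 20825.54)

The imbalance between the rows (roughly 384 IOPS at ~76 ms average with a 1.1 s max on NSID 1, versus ~7.1k IOPS at ~18 ms on NSID 2) is consistent with NSID 1 being the namespace that is repeatedly detached and re-attached during the run, while NSID 2 stays attached and is only resized.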
00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:48.963 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:49.223 null4 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:49.223 null5 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:49.223 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:49.483 null6 00:11:49.483 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:49.483 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:49.483 15:16:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:49.744 null7 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
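At this point the single-loop phase is over and the script fans out: the @58-@66 markers show it creating eight null bdevs (null0 through null7) and then starting eight add_remove workers in the background, one per namespace ID, before waiting on all of them. A bash sketch of that fan-out, reconstructed from the trace (the loop shape is inferred; the worker itself is sketched after the next stretch of trace):

    # Reconstruction of the @58-@66 fan-out; the rpc.py path is taken from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # @59-@60: create the backing bdevs first
        "$rpc" bdev_null_create "null$i" 100 4096        # null0 .. null7, as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do                 # @62-@64: one background worker per NSID/bdev pair
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                    # @66: "wait 573565 573566 ..." in the trace

From here on the trace lines interleave because all eight background workers log through the same output stream.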
00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
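The @14-@18 lines interleaved through this part of the trace are the bodies of those eight background workers: each one runs ten add/remove cycles for its own namespace ID against its own null bdev. A minimal sketch of the worker, again reconstructed from the trace markers rather than copied from the script (names follow the trace; treat it as an approximation, not the script's source):

    # Reconstruction of add_remove from the @14-@18 markers; rpc.py path as above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                            # e.g. "add_remove 1 null0" in the trace
        for ((i = 0; i < 10; i++)); do                   # @16: ten cycles per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

With eight of these running concurrently, the target sees namespaces 1-8 being attached and detached in parallel, which is the hot-plug stress this test is designed to produce.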
00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 573565 573566 573568 573570 573572 573574 573576 573578 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:49.744 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.745 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.005 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.266 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.267 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.527 15:16:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.527 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.788 15:17:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.788 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.048 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:51.309 15:17:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.309 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:51.569 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.569 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.569 15:17:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.569 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.829 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.830 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.090 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:52.091 15:17:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:52.091 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.351 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:52.612 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.612 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.612 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.612 15:17:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.612 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.871 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.131 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.391 rmmod nvme_tcp 00:11:53.391 rmmod nvme_fabrics 00:11:53.391 rmmod nvme_keyring 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 566636 ']' 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 566636 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 566636 ']' 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 566636 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 566636 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 566636' 00:11:53.391 killing process with pid 566636 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- 
# kill 566636 00:11:53.391 15:17:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 566636 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.391 15:17:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.934 15:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.934 00:11:55.934 real 0m48.319s 00:11:55.934 user 3m11.654s 00:11:55.934 sys 0m15.660s 00:11:55.934 15:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.934 15:17:05 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.934 ************************************ 00:11:55.934 END TEST nvmf_ns_hotplug_stress 00:11:55.934 ************************************ 00:11:55.934 15:17:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:55.934 15:17:05 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:55.934 15:17:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.934 15:17:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.934 15:17:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:55.934 ************************************ 00:11:55.934 START TEST nvmf_connect_stress 00:11:55.934 ************************************ 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:55.934 * Looking for test storage... 
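For readability, the add/remove churn in the ns_hotplug_stress xtrace above reduces to a short loop. The sketch below is only inferred from the logged ns_hotplug_stress.sh@16-@18 lines: the helper name hotplug_ns and the one-worker-per-namespace structure are assumptions (the interleaved ordering of the RPCs suggests they run concurrently), while the rpc.py invocations themselves are exactly those shown in the log.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # hypothetical worker: repeatedly hot-add and hot-remove one namespace
    hotplug_ns() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; ++i )); do                              # sh@16 in the xtrace
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18
        done
    }

    for n in {1..8}; do
        hotplug_ns "$n" "null$((n - 1))" &    # null0..null7 become namespaces 1..8
    done
    wait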
00:11:55.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.934 15:17:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:04.142 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:04.142 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.142 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:04.143 Found net devices under 0000:31:00.0: cvl_0_0 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.143 15:17:12 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:04.143 Found net devices under 0000:31:00.1: cvl_0_1 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.143 15:17:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:12:04.143 00:12:04.143 --- 10.0.0.2 ping statistics --- 00:12:04.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.143 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:12:04.143 00:12:04.143 --- 10.0.0.1 ping statistics --- 00:12:04.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.143 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=578955 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 578955 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 578955 ']' 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:04.143 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.143 [2024-07-15 15:17:13.154877] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
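Note: the run above is nvmf_tcp_init from nvmf/common.sh building the loopback test topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and one ping in each direction confirms reachability before the target application is started. A condensed restatement of the commands already logged above, not new steps; interface and namespace names are the ones discovered earlier in this run:

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP to the listener port
    ping -c 1 10.0.0.2                                                  # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target netns -> initiator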
00:12:04.143 [2024-07-15 15:17:13.154929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.143 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.143 [2024-07-15 15:17:13.225661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:04.143 [2024-07-15 15:17:13.289626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.143 [2024-07-15 15:17:13.289660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.143 [2024-07-15 15:17:13.289667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.143 [2024-07-15 15:17:13.289674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.143 [2024-07-15 15:17:13.289679] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.143 [2024-07-15 15:17:13.289793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.143 [2024-07-15 15:17:13.289942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.143 [2024-07-15 15:17:13.290106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.403 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.403 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:04.403 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.403 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.403 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.404 [2024-07-15 15:17:13.977411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.404 15:17:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.404 [2024-07-15 15:17:14.018046] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.404 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.404 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:04.404 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.404 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.664 NULL1 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=579196 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress 
-- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.664 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.924 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.924 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:04.924 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.924 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.924 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.184 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.184 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:05.184 15:17:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.184 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.184 15:17:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.754 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.754 15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:05.754 
15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.754 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.754 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.015 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.015 15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:06.015 15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.015 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.015 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.275 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.275 15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:06.275 15:17:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.275 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.275 15:17:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.537 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:06.537 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.537 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.537 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.797 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.797 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:06.797 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.797 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.797 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.366 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.366 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:07.366 15:17:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.366 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.366 15:17:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.626 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.626 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:07.626 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.626 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.626 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.886 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.886 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:07.886 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:12:07.886 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.886 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.146 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.146 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:08.146 15:17:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.146 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.146 15:17:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.714 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.714 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:08.715 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.715 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.715 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.974 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.974 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:08.974 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.974 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.974 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.234 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.234 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:09.235 15:17:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.235 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.235 15:17:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.494 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.494 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:09.494 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.494 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.494 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.753 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.753 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:09.753 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.753 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.753 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.322 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.322 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:10.322 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.322 15:17:19 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.322 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.582 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.582 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:10.582 15:17:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.582 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.582 15:17:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.841 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.841 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:10.841 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.841 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.841 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.101 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.101 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:11.101 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.101 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.101 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.361 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.361 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:11.361 15:17:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.361 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.361 15:17:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.931 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.931 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:11.931 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.931 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.931 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.191 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.191 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:12.191 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.191 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.191 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.450 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.450 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:12.450 15:17:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.450 15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.450 
15:17:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.710 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:12.710 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.710 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.710 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.278 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.278 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:13.278 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.278 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.278 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.537 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.537 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:13.537 15:17:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.537 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.537 15:17:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.798 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.798 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:13.798 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.798 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.798 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.058 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.058 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:14.058 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.058 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.058 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.318 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.318 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:14.318 15:17:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.318 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.318 15:17:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.578 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 579196 00:12:14.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (579196) - No such process 00:12:14.838 15:17:24 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 579196 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.838 rmmod nvme_tcp 00:12:14.838 rmmod nvme_fabrics 00:12:14.838 rmmod nvme_keyring 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 578955 ']' 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 578955 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 578955 ']' 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 578955 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 578955 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 578955' 00:12:14.838 killing process with pid 578955 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 578955 00:12:14.838 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 578955 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.117 15:17:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:17.047 15:17:26 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:17.047 00:12:17.047 real 0m21.388s 00:12:17.047 user 0m42.487s 00:12:17.047 sys 0m8.938s 00:12:17.047 15:17:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:17.047 15:17:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.047 ************************************ 00:12:17.047 END TEST nvmf_connect_stress 00:12:17.047 ************************************ 00:12:17.047 15:17:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:17.047 15:17:26 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.047 15:17:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:17.047 15:17:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.047 15:17:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:17.047 ************************************ 00:12:17.047 START TEST nvmf_fused_ordering 00:12:17.047 ************************************ 00:12:17.047 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:17.307 * Looking for test storage... 00:12:17.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.307 15:17:26 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.307 15:17:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:25.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:25.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:25.439 Found net devices under 0000:31:00.0: cvl_0_0 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:25.439 Found net devices under 0000:31:00.1: cvl_0_1 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:25.439 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.523 ms 00:12:25.440 00:12:25.440 --- 10.0.0.2 ping statistics --- 00:12:25.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.440 rtt min/avg/max/mdev = 0.523/0.523/0.523/0.000 ms 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:12:25.440 00:12:25.440 --- 10.0.0.1 ping statistics --- 00:12:25.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.440 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=585691 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 585691 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 585691 ']' 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:25.440 15:17:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.440 [2024-07-15 15:17:34.514019] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:25.440 [2024-07-15 15:17:34.514081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.440 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.440 [2024-07-15 15:17:34.593106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.440 [2024-07-15 15:17:34.666661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.440 [2024-07-15 15:17:34.666700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.440 [2024-07-15 15:17:34.666707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.440 [2024-07-15 15:17:34.666714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.440 [2024-07-15 15:17:34.666719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
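Note: the nvmfappstart call above brings the target application up inside that namespace and waits for its RPC socket (the earlier connect_stress run did the same with -m 0xE, i.e. three cores). A sketch of the logged invocation, assuming the usual background-and-poll pattern; -m 0x2 is a core mask selecting a single core:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                      # 585691 in this run
    waitforlisten "$nvmfpid"        # returns once /var/tmp/spdk.sock is accepting RPCs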
00:12:25.440 [2024-07-15 15:17:34.666747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.699 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:25.699 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:25.699 15:17:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.699 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:25.699 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.958 [2024-07-15 15:17:35.325871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.958 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.959 [2024-07-15 15:17:35.350056] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.959 NULL1 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.959 15:17:35 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.959 15:17:35 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:25.959 [2024-07-15 15:17:35.414898] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:25.959 [2024-07-15 15:17:35.414963] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid585915 ] 00:12:25.959 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.230 Attached to nqn.2016-06.io.spdk:cnode1 00:12:26.230 Namespace ID: 1 size: 1GB 00:12:26.230 fused_ordering(0) 00:12:26.230 fused_ordering(1) 00:12:26.230 fused_ordering(2) 00:12:26.230 fused_ordering(3) 00:12:26.230 fused_ordering(4) 00:12:26.230 fused_ordering(5) 00:12:26.230 fused_ordering(6) 00:12:26.230 fused_ordering(7) 00:12:26.230 fused_ordering(8) 00:12:26.230 fused_ordering(9) 00:12:26.230 fused_ordering(10) 00:12:26.230 fused_ordering(11) 00:12:26.230 fused_ordering(12) 00:12:26.230 fused_ordering(13) 00:12:26.230 fused_ordering(14) 00:12:26.230 fused_ordering(15) 00:12:26.230 fused_ordering(16) 00:12:26.230 fused_ordering(17) 00:12:26.230 fused_ordering(18) 00:12:26.230 fused_ordering(19) 00:12:26.230 fused_ordering(20) 00:12:26.230 fused_ordering(21) 00:12:26.230 fused_ordering(22) 00:12:26.230 fused_ordering(23) 00:12:26.230 fused_ordering(24) 00:12:26.230 fused_ordering(25) 00:12:26.230 fused_ordering(26) 00:12:26.230 fused_ordering(27) 00:12:26.230 fused_ordering(28) 00:12:26.230 fused_ordering(29) 00:12:26.230 fused_ordering(30) 00:12:26.230 fused_ordering(31) 00:12:26.230 fused_ordering(32) 00:12:26.230 fused_ordering(33) 00:12:26.230 fused_ordering(34) 00:12:26.230 fused_ordering(35) 00:12:26.230 fused_ordering(36) 00:12:26.230 fused_ordering(37) 00:12:26.230 fused_ordering(38) 00:12:26.230 fused_ordering(39) 00:12:26.230 fused_ordering(40) 00:12:26.230 fused_ordering(41) 00:12:26.230 fused_ordering(42) 00:12:26.230 fused_ordering(43) 00:12:26.230 fused_ordering(44) 00:12:26.230 fused_ordering(45) 00:12:26.230 fused_ordering(46) 00:12:26.230 fused_ordering(47) 00:12:26.231 fused_ordering(48) 00:12:26.231 fused_ordering(49) 00:12:26.231 fused_ordering(50) 00:12:26.231 fused_ordering(51) 00:12:26.231 fused_ordering(52) 00:12:26.231 fused_ordering(53) 00:12:26.231 fused_ordering(54) 00:12:26.231 fused_ordering(55) 00:12:26.231 fused_ordering(56) 00:12:26.231 fused_ordering(57) 00:12:26.231 fused_ordering(58) 00:12:26.231 fused_ordering(59) 00:12:26.231 fused_ordering(60) 00:12:26.231 fused_ordering(61) 00:12:26.231 fused_ordering(62) 00:12:26.231 fused_ordering(63) 00:12:26.231 fused_ordering(64) 00:12:26.231 fused_ordering(65) 00:12:26.231 fused_ordering(66) 00:12:26.231 fused_ordering(67) 00:12:26.231 fused_ordering(68) 00:12:26.231 fused_ordering(69) 00:12:26.231 fused_ordering(70) 00:12:26.231 fused_ordering(71) 00:12:26.231 fused_ordering(72) 00:12:26.231 fused_ordering(73) 00:12:26.231 fused_ordering(74) 00:12:26.231 fused_ordering(75) 00:12:26.231 fused_ordering(76) 00:12:26.231 fused_ordering(77) 00:12:26.231 fused_ordering(78) 00:12:26.231 
fused_ordering(79) 00:12:26.231 fused_ordering(80) 00:12:26.231 fused_ordering(81) 00:12:26.231 fused_ordering(82) 00:12:26.231 fused_ordering(83) 00:12:26.231 fused_ordering(84) 00:12:26.231 fused_ordering(85) 00:12:26.231 fused_ordering(86) 00:12:26.231 fused_ordering(87) 00:12:26.231 fused_ordering(88) 00:12:26.231 fused_ordering(89) 00:12:26.231 fused_ordering(90) 00:12:26.231 fused_ordering(91) 00:12:26.231 fused_ordering(92) 00:12:26.231 fused_ordering(93) 00:12:26.231 fused_ordering(94) 00:12:26.231 fused_ordering(95) 00:12:26.231 fused_ordering(96) 00:12:26.231 fused_ordering(97) 00:12:26.231 fused_ordering(98) 00:12:26.231 fused_ordering(99) 00:12:26.231 fused_ordering(100) 00:12:26.231 fused_ordering(101) 00:12:26.231 fused_ordering(102) 00:12:26.231 fused_ordering(103) 00:12:26.231 fused_ordering(104) 00:12:26.231 fused_ordering(105) 00:12:26.231 fused_ordering(106) 00:12:26.231 fused_ordering(107) 00:12:26.231 fused_ordering(108) 00:12:26.231 fused_ordering(109) 00:12:26.231 fused_ordering(110) 00:12:26.231 fused_ordering(111) 00:12:26.231 fused_ordering(112) 00:12:26.231 fused_ordering(113) 00:12:26.231 fused_ordering(114) 00:12:26.231 fused_ordering(115) 00:12:26.231 fused_ordering(116) 00:12:26.231 fused_ordering(117) 00:12:26.231 fused_ordering(118) 00:12:26.231 fused_ordering(119) 00:12:26.231 fused_ordering(120) 00:12:26.231 fused_ordering(121) 00:12:26.231 fused_ordering(122) 00:12:26.231 fused_ordering(123) 00:12:26.231 fused_ordering(124) 00:12:26.231 fused_ordering(125) 00:12:26.231 fused_ordering(126) 00:12:26.231 fused_ordering(127) 00:12:26.231 fused_ordering(128) 00:12:26.231 fused_ordering(129) 00:12:26.231 fused_ordering(130) 00:12:26.231 fused_ordering(131) 00:12:26.231 fused_ordering(132) 00:12:26.231 fused_ordering(133) 00:12:26.231 fused_ordering(134) 00:12:26.231 fused_ordering(135) 00:12:26.231 fused_ordering(136) 00:12:26.231 fused_ordering(137) 00:12:26.231 fused_ordering(138) 00:12:26.231 fused_ordering(139) 00:12:26.231 fused_ordering(140) 00:12:26.231 fused_ordering(141) 00:12:26.231 fused_ordering(142) 00:12:26.231 fused_ordering(143) 00:12:26.231 fused_ordering(144) 00:12:26.231 fused_ordering(145) 00:12:26.231 fused_ordering(146) 00:12:26.231 fused_ordering(147) 00:12:26.231 fused_ordering(148) 00:12:26.231 fused_ordering(149) 00:12:26.231 fused_ordering(150) 00:12:26.231 fused_ordering(151) 00:12:26.231 fused_ordering(152) 00:12:26.231 fused_ordering(153) 00:12:26.231 fused_ordering(154) 00:12:26.231 fused_ordering(155) 00:12:26.231 fused_ordering(156) 00:12:26.231 fused_ordering(157) 00:12:26.231 fused_ordering(158) 00:12:26.231 fused_ordering(159) 00:12:26.231 fused_ordering(160) 00:12:26.231 fused_ordering(161) 00:12:26.231 fused_ordering(162) 00:12:26.231 fused_ordering(163) 00:12:26.231 fused_ordering(164) 00:12:26.231 fused_ordering(165) 00:12:26.231 fused_ordering(166) 00:12:26.231 fused_ordering(167) 00:12:26.231 fused_ordering(168) 00:12:26.231 fused_ordering(169) 00:12:26.231 fused_ordering(170) 00:12:26.231 fused_ordering(171) 00:12:26.231 fused_ordering(172) 00:12:26.231 fused_ordering(173) 00:12:26.231 fused_ordering(174) 00:12:26.231 fused_ordering(175) 00:12:26.231 fused_ordering(176) 00:12:26.231 fused_ordering(177) 00:12:26.231 fused_ordering(178) 00:12:26.231 fused_ordering(179) 00:12:26.231 fused_ordering(180) 00:12:26.231 fused_ordering(181) 00:12:26.231 fused_ordering(182) 00:12:26.231 fused_ordering(183) 00:12:26.231 fused_ordering(184) 00:12:26.231 fused_ordering(185) 00:12:26.231 fused_ordering(186) 00:12:26.231 
fused_ordering(187) 00:12:26.231 fused_ordering(188) 00:12:26.231 fused_ordering(189) 00:12:26.231 fused_ordering(190) 00:12:26.231 fused_ordering(191) 00:12:26.231 fused_ordering(192) 00:12:26.231 fused_ordering(193) 00:12:26.231 fused_ordering(194) 00:12:26.231 fused_ordering(195) 00:12:26.231 fused_ordering(196) 00:12:26.231 fused_ordering(197) 00:12:26.231 fused_ordering(198) 00:12:26.231 fused_ordering(199) 00:12:26.231 fused_ordering(200) 00:12:26.231 fused_ordering(201) 00:12:26.231 fused_ordering(202) 00:12:26.231 fused_ordering(203) 00:12:26.231 fused_ordering(204) 00:12:26.231 fused_ordering(205) 00:12:26.799 fused_ordering(206) 00:12:26.799 fused_ordering(207) 00:12:26.799 fused_ordering(208) 00:12:26.799 fused_ordering(209) 00:12:26.799 fused_ordering(210) 00:12:26.799 fused_ordering(211) 00:12:26.799 fused_ordering(212) 00:12:26.799 fused_ordering(213) 00:12:26.799 fused_ordering(214) 00:12:26.799 fused_ordering(215) 00:12:26.799 fused_ordering(216) 00:12:26.799 fused_ordering(217) 00:12:26.799 fused_ordering(218) 00:12:26.799 fused_ordering(219) 00:12:26.799 fused_ordering(220) 00:12:26.799 fused_ordering(221) 00:12:26.799 fused_ordering(222) 00:12:26.799 fused_ordering(223) 00:12:26.799 fused_ordering(224) 00:12:26.799 fused_ordering(225) 00:12:26.799 fused_ordering(226) 00:12:26.799 fused_ordering(227) 00:12:26.799 fused_ordering(228) 00:12:26.799 fused_ordering(229) 00:12:26.799 fused_ordering(230) 00:12:26.799 fused_ordering(231) 00:12:26.799 fused_ordering(232) 00:12:26.799 fused_ordering(233) 00:12:26.799 fused_ordering(234) 00:12:26.799 fused_ordering(235) 00:12:26.799 fused_ordering(236) 00:12:26.799 fused_ordering(237) 00:12:26.799 fused_ordering(238) 00:12:26.799 fused_ordering(239) 00:12:26.799 fused_ordering(240) 00:12:26.799 fused_ordering(241) 00:12:26.799 fused_ordering(242) 00:12:26.799 fused_ordering(243) 00:12:26.799 fused_ordering(244) 00:12:26.799 fused_ordering(245) 00:12:26.799 fused_ordering(246) 00:12:26.799 fused_ordering(247) 00:12:26.799 fused_ordering(248) 00:12:26.799 fused_ordering(249) 00:12:26.799 fused_ordering(250) 00:12:26.799 fused_ordering(251) 00:12:26.799 fused_ordering(252) 00:12:26.799 fused_ordering(253) 00:12:26.799 fused_ordering(254) 00:12:26.799 fused_ordering(255) 00:12:26.799 fused_ordering(256) 00:12:26.799 fused_ordering(257) 00:12:26.799 fused_ordering(258) 00:12:26.799 fused_ordering(259) 00:12:26.799 fused_ordering(260) 00:12:26.799 fused_ordering(261) 00:12:26.799 fused_ordering(262) 00:12:26.799 fused_ordering(263) 00:12:26.799 fused_ordering(264) 00:12:26.799 fused_ordering(265) 00:12:26.799 fused_ordering(266) 00:12:26.799 fused_ordering(267) 00:12:26.799 fused_ordering(268) 00:12:26.799 fused_ordering(269) 00:12:26.799 fused_ordering(270) 00:12:26.799 fused_ordering(271) 00:12:26.799 fused_ordering(272) 00:12:26.799 fused_ordering(273) 00:12:26.799 fused_ordering(274) 00:12:26.799 fused_ordering(275) 00:12:26.799 fused_ordering(276) 00:12:26.799 fused_ordering(277) 00:12:26.799 fused_ordering(278) 00:12:26.799 fused_ordering(279) 00:12:26.799 fused_ordering(280) 00:12:26.799 fused_ordering(281) 00:12:26.799 fused_ordering(282) 00:12:26.799 fused_ordering(283) 00:12:26.799 fused_ordering(284) 00:12:26.799 fused_ordering(285) 00:12:26.799 fused_ordering(286) 00:12:26.799 fused_ordering(287) 00:12:26.799 fused_ordering(288) 00:12:26.799 fused_ordering(289) 00:12:26.799 fused_ordering(290) 00:12:26.799 fused_ordering(291) 00:12:26.799 fused_ordering(292) 00:12:26.799 fused_ordering(293) 00:12:26.799 fused_ordering(294) 
00:12:26.799 fused_ordering(295) 00:12:26.799 fused_ordering(296) 00:12:26.799 fused_ordering(297) 00:12:26.799 fused_ordering(298) 00:12:26.799 fused_ordering(299) 00:12:26.799 fused_ordering(300) 00:12:26.800 fused_ordering(301) 00:12:26.800 fused_ordering(302) 00:12:26.800 fused_ordering(303) 00:12:26.800 fused_ordering(304) 00:12:26.800 fused_ordering(305) 00:12:26.800 fused_ordering(306) 00:12:26.800 fused_ordering(307) 00:12:26.800 fused_ordering(308) 00:12:26.800 fused_ordering(309) 00:12:26.800 fused_ordering(310) 00:12:26.800 fused_ordering(311) 00:12:26.800 fused_ordering(312) 00:12:26.800 fused_ordering(313) 00:12:26.800 fused_ordering(314) 00:12:26.800 fused_ordering(315) 00:12:26.800 fused_ordering(316) 00:12:26.800 fused_ordering(317) 00:12:26.800 fused_ordering(318) 00:12:26.800 fused_ordering(319) 00:12:26.800 fused_ordering(320) 00:12:26.800 fused_ordering(321) 00:12:26.800 fused_ordering(322) 00:12:26.800 fused_ordering(323) 00:12:26.800 fused_ordering(324) 00:12:26.800 fused_ordering(325) 00:12:26.800 fused_ordering(326) 00:12:26.800 fused_ordering(327) 00:12:26.800 fused_ordering(328) 00:12:26.800 fused_ordering(329) 00:12:26.800 fused_ordering(330) 00:12:26.800 fused_ordering(331) 00:12:26.800 fused_ordering(332) 00:12:26.800 fused_ordering(333) 00:12:26.800 fused_ordering(334) 00:12:26.800 fused_ordering(335) 00:12:26.800 fused_ordering(336) 00:12:26.800 fused_ordering(337) 00:12:26.800 fused_ordering(338) 00:12:26.800 fused_ordering(339) 00:12:26.800 fused_ordering(340) 00:12:26.800 fused_ordering(341) 00:12:26.800 fused_ordering(342) 00:12:26.800 fused_ordering(343) 00:12:26.800 fused_ordering(344) 00:12:26.800 fused_ordering(345) 00:12:26.800 fused_ordering(346) 00:12:26.800 fused_ordering(347) 00:12:26.800 fused_ordering(348) 00:12:26.800 fused_ordering(349) 00:12:26.800 fused_ordering(350) 00:12:26.800 fused_ordering(351) 00:12:26.800 fused_ordering(352) 00:12:26.800 fused_ordering(353) 00:12:26.800 fused_ordering(354) 00:12:26.800 fused_ordering(355) 00:12:26.800 fused_ordering(356) 00:12:26.800 fused_ordering(357) 00:12:26.800 fused_ordering(358) 00:12:26.800 fused_ordering(359) 00:12:26.800 fused_ordering(360) 00:12:26.800 fused_ordering(361) 00:12:26.800 fused_ordering(362) 00:12:26.800 fused_ordering(363) 00:12:26.800 fused_ordering(364) 00:12:26.800 fused_ordering(365) 00:12:26.800 fused_ordering(366) 00:12:26.800 fused_ordering(367) 00:12:26.800 fused_ordering(368) 00:12:26.800 fused_ordering(369) 00:12:26.800 fused_ordering(370) 00:12:26.800 fused_ordering(371) 00:12:26.800 fused_ordering(372) 00:12:26.800 fused_ordering(373) 00:12:26.800 fused_ordering(374) 00:12:26.800 fused_ordering(375) 00:12:26.800 fused_ordering(376) 00:12:26.800 fused_ordering(377) 00:12:26.800 fused_ordering(378) 00:12:26.800 fused_ordering(379) 00:12:26.800 fused_ordering(380) 00:12:26.800 fused_ordering(381) 00:12:26.800 fused_ordering(382) 00:12:26.800 fused_ordering(383) 00:12:26.800 fused_ordering(384) 00:12:26.800 fused_ordering(385) 00:12:26.800 fused_ordering(386) 00:12:26.800 fused_ordering(387) 00:12:26.800 fused_ordering(388) 00:12:26.800 fused_ordering(389) 00:12:26.800 fused_ordering(390) 00:12:26.800 fused_ordering(391) 00:12:26.800 fused_ordering(392) 00:12:26.800 fused_ordering(393) 00:12:26.800 fused_ordering(394) 00:12:26.800 fused_ordering(395) 00:12:26.800 fused_ordering(396) 00:12:26.800 fused_ordering(397) 00:12:26.800 fused_ordering(398) 00:12:26.800 fused_ordering(399) 00:12:26.800 fused_ordering(400) 00:12:26.800 fused_ordering(401) 00:12:26.800 
fused_ordering(402) 00:12:26.800 fused_ordering(403) 00:12:26.800 fused_ordering(404) 00:12:26.800 fused_ordering(405) 00:12:26.800 fused_ordering(406) 00:12:26.800 fused_ordering(407) 00:12:26.800 fused_ordering(408) 00:12:26.800 fused_ordering(409) 00:12:26.800 fused_ordering(410) 00:12:27.059 fused_ordering(411) 00:12:27.059 fused_ordering(412) 00:12:27.059 fused_ordering(413) 00:12:27.059 fused_ordering(414) 00:12:27.059 fused_ordering(415) 00:12:27.059 fused_ordering(416) 00:12:27.059 fused_ordering(417) 00:12:27.059 fused_ordering(418) 00:12:27.059 fused_ordering(419) 00:12:27.059 fused_ordering(420) 00:12:27.059 fused_ordering(421) 00:12:27.059 fused_ordering(422) 00:12:27.059 fused_ordering(423) 00:12:27.059 fused_ordering(424) 00:12:27.059 fused_ordering(425) 00:12:27.059 fused_ordering(426) 00:12:27.059 fused_ordering(427) 00:12:27.059 fused_ordering(428) 00:12:27.059 fused_ordering(429) 00:12:27.059 fused_ordering(430) 00:12:27.059 fused_ordering(431) 00:12:27.059 fused_ordering(432) 00:12:27.059 fused_ordering(433) 00:12:27.059 fused_ordering(434) 00:12:27.059 fused_ordering(435) 00:12:27.059 fused_ordering(436) 00:12:27.059 fused_ordering(437) 00:12:27.059 fused_ordering(438) 00:12:27.059 fused_ordering(439) 00:12:27.059 fused_ordering(440) 00:12:27.059 fused_ordering(441) 00:12:27.059 fused_ordering(442) 00:12:27.059 fused_ordering(443) 00:12:27.059 fused_ordering(444) 00:12:27.059 fused_ordering(445) 00:12:27.059 fused_ordering(446) 00:12:27.059 fused_ordering(447) 00:12:27.059 fused_ordering(448) 00:12:27.059 fused_ordering(449) 00:12:27.059 fused_ordering(450) 00:12:27.059 fused_ordering(451) 00:12:27.059 fused_ordering(452) 00:12:27.059 fused_ordering(453) 00:12:27.059 fused_ordering(454) 00:12:27.059 fused_ordering(455) 00:12:27.059 fused_ordering(456) 00:12:27.059 fused_ordering(457) 00:12:27.059 fused_ordering(458) 00:12:27.059 fused_ordering(459) 00:12:27.059 fused_ordering(460) 00:12:27.059 fused_ordering(461) 00:12:27.059 fused_ordering(462) 00:12:27.059 fused_ordering(463) 00:12:27.059 fused_ordering(464) 00:12:27.059 fused_ordering(465) 00:12:27.059 fused_ordering(466) 00:12:27.059 fused_ordering(467) 00:12:27.059 fused_ordering(468) 00:12:27.059 fused_ordering(469) 00:12:27.059 fused_ordering(470) 00:12:27.059 fused_ordering(471) 00:12:27.059 fused_ordering(472) 00:12:27.059 fused_ordering(473) 00:12:27.059 fused_ordering(474) 00:12:27.059 fused_ordering(475) 00:12:27.059 fused_ordering(476) 00:12:27.059 fused_ordering(477) 00:12:27.059 fused_ordering(478) 00:12:27.059 fused_ordering(479) 00:12:27.059 fused_ordering(480) 00:12:27.059 fused_ordering(481) 00:12:27.059 fused_ordering(482) 00:12:27.059 fused_ordering(483) 00:12:27.059 fused_ordering(484) 00:12:27.059 fused_ordering(485) 00:12:27.059 fused_ordering(486) 00:12:27.059 fused_ordering(487) 00:12:27.059 fused_ordering(488) 00:12:27.059 fused_ordering(489) 00:12:27.059 fused_ordering(490) 00:12:27.059 fused_ordering(491) 00:12:27.059 fused_ordering(492) 00:12:27.059 fused_ordering(493) 00:12:27.059 fused_ordering(494) 00:12:27.059 fused_ordering(495) 00:12:27.059 fused_ordering(496) 00:12:27.059 fused_ordering(497) 00:12:27.059 fused_ordering(498) 00:12:27.059 fused_ordering(499) 00:12:27.059 fused_ordering(500) 00:12:27.059 fused_ordering(501) 00:12:27.059 fused_ordering(502) 00:12:27.059 fused_ordering(503) 00:12:27.059 fused_ordering(504) 00:12:27.059 fused_ordering(505) 00:12:27.059 fused_ordering(506) 00:12:27.059 fused_ordering(507) 00:12:27.059 fused_ordering(508) 00:12:27.059 fused_ordering(509) 
00:12:27.059 fused_ordering(510) 00:12:27.059 fused_ordering(511) 00:12:27.059 fused_ordering(512) 00:12:27.059 fused_ordering(513) 00:12:27.059 fused_ordering(514) 00:12:27.059 fused_ordering(515) 00:12:27.059 fused_ordering(516) 00:12:27.059 fused_ordering(517) 00:12:27.059 fused_ordering(518) 00:12:27.059 fused_ordering(519) 00:12:27.059 fused_ordering(520) 00:12:27.059 fused_ordering(521) 00:12:27.059 fused_ordering(522) 00:12:27.059 fused_ordering(523) 00:12:27.059 fused_ordering(524) 00:12:27.059 fused_ordering(525) 00:12:27.059 fused_ordering(526) 00:12:27.059 fused_ordering(527) 00:12:27.059 fused_ordering(528) 00:12:27.059 fused_ordering(529) 00:12:27.059 fused_ordering(530) 00:12:27.059 fused_ordering(531) 00:12:27.059 fused_ordering(532) 00:12:27.059 fused_ordering(533) 00:12:27.060 fused_ordering(534) 00:12:27.060 fused_ordering(535) 00:12:27.060 fused_ordering(536) 00:12:27.060 fused_ordering(537) 00:12:27.060 fused_ordering(538) 00:12:27.060 fused_ordering(539) 00:12:27.060 fused_ordering(540) 00:12:27.060 fused_ordering(541) 00:12:27.060 fused_ordering(542) 00:12:27.060 fused_ordering(543) 00:12:27.060 fused_ordering(544) 00:12:27.060 fused_ordering(545) 00:12:27.060 fused_ordering(546) 00:12:27.060 fused_ordering(547) 00:12:27.060 fused_ordering(548) 00:12:27.060 fused_ordering(549) 00:12:27.060 fused_ordering(550) 00:12:27.060 fused_ordering(551) 00:12:27.060 fused_ordering(552) 00:12:27.060 fused_ordering(553) 00:12:27.060 fused_ordering(554) 00:12:27.060 fused_ordering(555) 00:12:27.060 fused_ordering(556) 00:12:27.060 fused_ordering(557) 00:12:27.060 fused_ordering(558) 00:12:27.060 fused_ordering(559) 00:12:27.060 fused_ordering(560) 00:12:27.060 fused_ordering(561) 00:12:27.060 fused_ordering(562) 00:12:27.060 fused_ordering(563) 00:12:27.060 fused_ordering(564) 00:12:27.060 fused_ordering(565) 00:12:27.060 fused_ordering(566) 00:12:27.060 fused_ordering(567) 00:12:27.060 fused_ordering(568) 00:12:27.060 fused_ordering(569) 00:12:27.060 fused_ordering(570) 00:12:27.060 fused_ordering(571) 00:12:27.060 fused_ordering(572) 00:12:27.060 fused_ordering(573) 00:12:27.060 fused_ordering(574) 00:12:27.060 fused_ordering(575) 00:12:27.060 fused_ordering(576) 00:12:27.060 fused_ordering(577) 00:12:27.060 fused_ordering(578) 00:12:27.060 fused_ordering(579) 00:12:27.060 fused_ordering(580) 00:12:27.060 fused_ordering(581) 00:12:27.060 fused_ordering(582) 00:12:27.060 fused_ordering(583) 00:12:27.060 fused_ordering(584) 00:12:27.060 fused_ordering(585) 00:12:27.060 fused_ordering(586) 00:12:27.060 fused_ordering(587) 00:12:27.060 fused_ordering(588) 00:12:27.060 fused_ordering(589) 00:12:27.060 fused_ordering(590) 00:12:27.060 fused_ordering(591) 00:12:27.060 fused_ordering(592) 00:12:27.060 fused_ordering(593) 00:12:27.060 fused_ordering(594) 00:12:27.060 fused_ordering(595) 00:12:27.060 fused_ordering(596) 00:12:27.060 fused_ordering(597) 00:12:27.060 fused_ordering(598) 00:12:27.060 fused_ordering(599) 00:12:27.060 fused_ordering(600) 00:12:27.060 fused_ordering(601) 00:12:27.060 fused_ordering(602) 00:12:27.060 fused_ordering(603) 00:12:27.060 fused_ordering(604) 00:12:27.060 fused_ordering(605) 00:12:27.060 fused_ordering(606) 00:12:27.060 fused_ordering(607) 00:12:27.060 fused_ordering(608) 00:12:27.060 fused_ordering(609) 00:12:27.060 fused_ordering(610) 00:12:27.060 fused_ordering(611) 00:12:27.060 fused_ordering(612) 00:12:27.060 fused_ordering(613) 00:12:27.060 fused_ordering(614) 00:12:27.060 fused_ordering(615) 00:12:27.629 fused_ordering(616) 00:12:27.629 
fused_ordering(617) 00:12:27.629 fused_ordering(618) 00:12:27.629 fused_ordering(619) 00:12:27.629 fused_ordering(620) 00:12:27.629 fused_ordering(621) 00:12:27.629 fused_ordering(622) 00:12:27.629 fused_ordering(623) 00:12:27.629 fused_ordering(624) 00:12:27.629 fused_ordering(625) 00:12:27.629 fused_ordering(626) 00:12:27.629 fused_ordering(627) 00:12:27.629 fused_ordering(628) 00:12:27.629 fused_ordering(629) 00:12:27.629 fused_ordering(630) 00:12:27.629 fused_ordering(631) 00:12:27.629 fused_ordering(632) 00:12:27.629 fused_ordering(633) 00:12:27.629 fused_ordering(634) 00:12:27.629 fused_ordering(635) 00:12:27.629 fused_ordering(636) 00:12:27.629 fused_ordering(637) 00:12:27.629 fused_ordering(638) 00:12:27.629 fused_ordering(639) 00:12:27.629 fused_ordering(640) 00:12:27.629 fused_ordering(641) 00:12:27.629 fused_ordering(642) 00:12:27.629 fused_ordering(643) 00:12:27.629 fused_ordering(644) 00:12:27.629 fused_ordering(645) 00:12:27.629 fused_ordering(646) 00:12:27.629 fused_ordering(647) 00:12:27.629 fused_ordering(648) 00:12:27.629 fused_ordering(649) 00:12:27.629 fused_ordering(650) 00:12:27.629 fused_ordering(651) 00:12:27.629 fused_ordering(652) 00:12:27.629 fused_ordering(653) 00:12:27.629 fused_ordering(654) 00:12:27.629 fused_ordering(655) 00:12:27.629 fused_ordering(656) 00:12:27.629 fused_ordering(657) 00:12:27.629 fused_ordering(658) 00:12:27.629 fused_ordering(659) 00:12:27.629 fused_ordering(660) 00:12:27.629 fused_ordering(661) 00:12:27.629 fused_ordering(662) 00:12:27.629 fused_ordering(663) 00:12:27.629 fused_ordering(664) 00:12:27.629 fused_ordering(665) 00:12:27.629 fused_ordering(666) 00:12:27.629 fused_ordering(667) 00:12:27.629 fused_ordering(668) 00:12:27.629 fused_ordering(669) 00:12:27.629 fused_ordering(670) 00:12:27.629 fused_ordering(671) 00:12:27.629 fused_ordering(672) 00:12:27.629 fused_ordering(673) 00:12:27.629 fused_ordering(674) 00:12:27.629 fused_ordering(675) 00:12:27.629 fused_ordering(676) 00:12:27.629 fused_ordering(677) 00:12:27.629 fused_ordering(678) 00:12:27.629 fused_ordering(679) 00:12:27.629 fused_ordering(680) 00:12:27.629 fused_ordering(681) 00:12:27.629 fused_ordering(682) 00:12:27.629 fused_ordering(683) 00:12:27.629 fused_ordering(684) 00:12:27.629 fused_ordering(685) 00:12:27.629 fused_ordering(686) 00:12:27.629 fused_ordering(687) 00:12:27.629 fused_ordering(688) 00:12:27.629 fused_ordering(689) 00:12:27.629 fused_ordering(690) 00:12:27.629 fused_ordering(691) 00:12:27.629 fused_ordering(692) 00:12:27.629 fused_ordering(693) 00:12:27.629 fused_ordering(694) 00:12:27.629 fused_ordering(695) 00:12:27.629 fused_ordering(696) 00:12:27.629 fused_ordering(697) 00:12:27.629 fused_ordering(698) 00:12:27.629 fused_ordering(699) 00:12:27.629 fused_ordering(700) 00:12:27.629 fused_ordering(701) 00:12:27.629 fused_ordering(702) 00:12:27.629 fused_ordering(703) 00:12:27.629 fused_ordering(704) 00:12:27.629 fused_ordering(705) 00:12:27.629 fused_ordering(706) 00:12:27.629 fused_ordering(707) 00:12:27.629 fused_ordering(708) 00:12:27.629 fused_ordering(709) 00:12:27.629 fused_ordering(710) 00:12:27.629 fused_ordering(711) 00:12:27.629 fused_ordering(712) 00:12:27.629 fused_ordering(713) 00:12:27.629 fused_ordering(714) 00:12:27.629 fused_ordering(715) 00:12:27.629 fused_ordering(716) 00:12:27.629 fused_ordering(717) 00:12:27.629 fused_ordering(718) 00:12:27.629 fused_ordering(719) 00:12:27.629 fused_ordering(720) 00:12:27.629 fused_ordering(721) 00:12:27.629 fused_ordering(722) 00:12:27.629 fused_ordering(723) 00:12:27.629 fused_ordering(724) 
00:12:27.629 fused_ordering(725) 00:12:27.629 fused_ordering(726) 00:12:27.629 fused_ordering(727) 00:12:27.629 fused_ordering(728) 00:12:27.629 fused_ordering(729) 00:12:27.629 fused_ordering(730) 00:12:27.629 fused_ordering(731) 00:12:27.629 fused_ordering(732) 00:12:27.629 fused_ordering(733) 00:12:27.629 fused_ordering(734) 00:12:27.629 fused_ordering(735) 00:12:27.629 fused_ordering(736) 00:12:27.629 fused_ordering(737) 00:12:27.629 fused_ordering(738) 00:12:27.629 fused_ordering(739) 00:12:27.629 fused_ordering(740) 00:12:27.629 fused_ordering(741) 00:12:27.629 fused_ordering(742) 00:12:27.629 fused_ordering(743) 00:12:27.629 fused_ordering(744) 00:12:27.629 fused_ordering(745) 00:12:27.629 fused_ordering(746) 00:12:27.629 fused_ordering(747) 00:12:27.629 fused_ordering(748) 00:12:27.629 fused_ordering(749) 00:12:27.629 fused_ordering(750) 00:12:27.629 fused_ordering(751) 00:12:27.629 fused_ordering(752) 00:12:27.629 fused_ordering(753) 00:12:27.629 fused_ordering(754) 00:12:27.629 fused_ordering(755) 00:12:27.629 fused_ordering(756) 00:12:27.629 fused_ordering(757) 00:12:27.629 fused_ordering(758) 00:12:27.629 fused_ordering(759) 00:12:27.629 fused_ordering(760) 00:12:27.629 fused_ordering(761) 00:12:27.629 fused_ordering(762) 00:12:27.629 fused_ordering(763) 00:12:27.629 fused_ordering(764) 00:12:27.629 fused_ordering(765) 00:12:27.629 fused_ordering(766) 00:12:27.629 fused_ordering(767) 00:12:27.629 fused_ordering(768) 00:12:27.629 fused_ordering(769) 00:12:27.629 fused_ordering(770) 00:12:27.629 fused_ordering(771) 00:12:27.629 fused_ordering(772) 00:12:27.629 fused_ordering(773) 00:12:27.629 fused_ordering(774) 00:12:27.629 fused_ordering(775) 00:12:27.629 fused_ordering(776) 00:12:27.629 fused_ordering(777) 00:12:27.629 fused_ordering(778) 00:12:27.629 fused_ordering(779) 00:12:27.629 fused_ordering(780) 00:12:27.629 fused_ordering(781) 00:12:27.629 fused_ordering(782) 00:12:27.629 fused_ordering(783) 00:12:27.629 fused_ordering(784) 00:12:27.629 fused_ordering(785) 00:12:27.629 fused_ordering(786) 00:12:27.629 fused_ordering(787) 00:12:27.629 fused_ordering(788) 00:12:27.629 fused_ordering(789) 00:12:27.629 fused_ordering(790) 00:12:27.629 fused_ordering(791) 00:12:27.629 fused_ordering(792) 00:12:27.629 fused_ordering(793) 00:12:27.629 fused_ordering(794) 00:12:27.629 fused_ordering(795) 00:12:27.629 fused_ordering(796) 00:12:27.629 fused_ordering(797) 00:12:27.629 fused_ordering(798) 00:12:27.629 fused_ordering(799) 00:12:27.629 fused_ordering(800) 00:12:27.629 fused_ordering(801) 00:12:27.629 fused_ordering(802) 00:12:27.629 fused_ordering(803) 00:12:27.629 fused_ordering(804) 00:12:27.629 fused_ordering(805) 00:12:27.629 fused_ordering(806) 00:12:27.629 fused_ordering(807) 00:12:27.629 fused_ordering(808) 00:12:27.629 fused_ordering(809) 00:12:27.629 fused_ordering(810) 00:12:27.629 fused_ordering(811) 00:12:27.629 fused_ordering(812) 00:12:27.629 fused_ordering(813) 00:12:27.629 fused_ordering(814) 00:12:27.629 fused_ordering(815) 00:12:27.629 fused_ordering(816) 00:12:27.629 fused_ordering(817) 00:12:27.629 fused_ordering(818) 00:12:27.629 fused_ordering(819) 00:12:27.629 fused_ordering(820) 00:12:28.199 fused_ordering(821) 00:12:28.199 fused_ordering(822) 00:12:28.199 fused_ordering(823) 00:12:28.199 fused_ordering(824) 00:12:28.199 fused_ordering(825) 00:12:28.199 fused_ordering(826) 00:12:28.199 fused_ordering(827) 00:12:28.199 fused_ordering(828) 00:12:28.199 fused_ordering(829) 00:12:28.199 fused_ordering(830) 00:12:28.199 fused_ordering(831) 00:12:28.199 
fused_ordering(832) 00:12:28.200 fused_ordering(833) 00:12:28.200 fused_ordering(834) 00:12:28.200 fused_ordering(835) 00:12:28.200 fused_ordering(836) 00:12:28.200 fused_ordering(837) 00:12:28.200 fused_ordering(838) 00:12:28.200 fused_ordering(839) 00:12:28.200 fused_ordering(840) 00:12:28.200 fused_ordering(841) 00:12:28.200 fused_ordering(842) 00:12:28.200 fused_ordering(843) 00:12:28.200 fused_ordering(844) 00:12:28.200 fused_ordering(845) 00:12:28.200 fused_ordering(846) 00:12:28.200 fused_ordering(847) 00:12:28.200 fused_ordering(848) 00:12:28.200 fused_ordering(849) 00:12:28.200 fused_ordering(850) 00:12:28.200 fused_ordering(851) 00:12:28.200 fused_ordering(852) 00:12:28.200 fused_ordering(853) 00:12:28.200 fused_ordering(854) 00:12:28.200 fused_ordering(855) 00:12:28.200 fused_ordering(856) 00:12:28.200 fused_ordering(857) 00:12:28.200 fused_ordering(858) 00:12:28.200 fused_ordering(859) 00:12:28.200 fused_ordering(860) 00:12:28.200 fused_ordering(861) 00:12:28.200 fused_ordering(862) 00:12:28.200 fused_ordering(863) 00:12:28.200 fused_ordering(864) 00:12:28.200 fused_ordering(865) 00:12:28.200 fused_ordering(866) 00:12:28.200 fused_ordering(867) 00:12:28.200 fused_ordering(868) 00:12:28.200 fused_ordering(869) 00:12:28.200 fused_ordering(870) 00:12:28.200 fused_ordering(871) 00:12:28.200 fused_ordering(872) 00:12:28.200 fused_ordering(873) 00:12:28.200 fused_ordering(874) 00:12:28.200 fused_ordering(875) 00:12:28.200 fused_ordering(876) 00:12:28.200 fused_ordering(877) 00:12:28.200 fused_ordering(878) 00:12:28.200 fused_ordering(879) 00:12:28.200 fused_ordering(880) 00:12:28.200 fused_ordering(881) 00:12:28.200 fused_ordering(882) 00:12:28.200 fused_ordering(883) 00:12:28.200 fused_ordering(884) 00:12:28.200 fused_ordering(885) 00:12:28.200 fused_ordering(886) 00:12:28.200 fused_ordering(887) 00:12:28.200 fused_ordering(888) 00:12:28.200 fused_ordering(889) 00:12:28.200 fused_ordering(890) 00:12:28.200 fused_ordering(891) 00:12:28.200 fused_ordering(892) 00:12:28.200 fused_ordering(893) 00:12:28.200 fused_ordering(894) 00:12:28.200 fused_ordering(895) 00:12:28.200 fused_ordering(896) 00:12:28.200 fused_ordering(897) 00:12:28.200 fused_ordering(898) 00:12:28.200 fused_ordering(899) 00:12:28.200 fused_ordering(900) 00:12:28.200 fused_ordering(901) 00:12:28.200 fused_ordering(902) 00:12:28.200 fused_ordering(903) 00:12:28.200 fused_ordering(904) 00:12:28.200 fused_ordering(905) 00:12:28.200 fused_ordering(906) 00:12:28.200 fused_ordering(907) 00:12:28.200 fused_ordering(908) 00:12:28.200 fused_ordering(909) 00:12:28.200 fused_ordering(910) 00:12:28.200 fused_ordering(911) 00:12:28.200 fused_ordering(912) 00:12:28.200 fused_ordering(913) 00:12:28.200 fused_ordering(914) 00:12:28.200 fused_ordering(915) 00:12:28.200 fused_ordering(916) 00:12:28.200 fused_ordering(917) 00:12:28.200 fused_ordering(918) 00:12:28.200 fused_ordering(919) 00:12:28.200 fused_ordering(920) 00:12:28.200 fused_ordering(921) 00:12:28.200 fused_ordering(922) 00:12:28.200 fused_ordering(923) 00:12:28.200 fused_ordering(924) 00:12:28.200 fused_ordering(925) 00:12:28.200 fused_ordering(926) 00:12:28.200 fused_ordering(927) 00:12:28.200 fused_ordering(928) 00:12:28.200 fused_ordering(929) 00:12:28.200 fused_ordering(930) 00:12:28.200 fused_ordering(931) 00:12:28.200 fused_ordering(932) 00:12:28.200 fused_ordering(933) 00:12:28.200 fused_ordering(934) 00:12:28.200 fused_ordering(935) 00:12:28.200 fused_ordering(936) 00:12:28.200 fused_ordering(937) 00:12:28.200 fused_ordering(938) 00:12:28.200 fused_ordering(939) 
00:12:28.200 fused_ordering(940) 00:12:28.200 fused_ordering(941) 00:12:28.200 fused_ordering(942) 00:12:28.200 fused_ordering(943) 00:12:28.200 fused_ordering(944) 00:12:28.200 fused_ordering(945) 00:12:28.200 fused_ordering(946) 00:12:28.200 fused_ordering(947) 00:12:28.200 fused_ordering(948) 00:12:28.200 fused_ordering(949) 00:12:28.200 fused_ordering(950) 00:12:28.200 fused_ordering(951) 00:12:28.200 fused_ordering(952) 00:12:28.200 fused_ordering(953) 00:12:28.200 fused_ordering(954) 00:12:28.200 fused_ordering(955) 00:12:28.200 fused_ordering(956) 00:12:28.200 fused_ordering(957) 00:12:28.200 fused_ordering(958) 00:12:28.200 fused_ordering(959) 00:12:28.200 fused_ordering(960) 00:12:28.200 fused_ordering(961) 00:12:28.200 fused_ordering(962) 00:12:28.200 fused_ordering(963) 00:12:28.200 fused_ordering(964) 00:12:28.200 fused_ordering(965) 00:12:28.200 fused_ordering(966) 00:12:28.200 fused_ordering(967) 00:12:28.200 fused_ordering(968) 00:12:28.200 fused_ordering(969) 00:12:28.200 fused_ordering(970) 00:12:28.200 fused_ordering(971) 00:12:28.200 fused_ordering(972) 00:12:28.200 fused_ordering(973) 00:12:28.200 fused_ordering(974) 00:12:28.200 fused_ordering(975) 00:12:28.200 fused_ordering(976) 00:12:28.200 fused_ordering(977) 00:12:28.200 fused_ordering(978) 00:12:28.200 fused_ordering(979) 00:12:28.200 fused_ordering(980) 00:12:28.200 fused_ordering(981) 00:12:28.200 fused_ordering(982) 00:12:28.200 fused_ordering(983) 00:12:28.200 fused_ordering(984) 00:12:28.200 fused_ordering(985) 00:12:28.200 fused_ordering(986) 00:12:28.200 fused_ordering(987) 00:12:28.200 fused_ordering(988) 00:12:28.200 fused_ordering(989) 00:12:28.200 fused_ordering(990) 00:12:28.200 fused_ordering(991) 00:12:28.200 fused_ordering(992) 00:12:28.200 fused_ordering(993) 00:12:28.200 fused_ordering(994) 00:12:28.200 fused_ordering(995) 00:12:28.200 fused_ordering(996) 00:12:28.200 fused_ordering(997) 00:12:28.200 fused_ordering(998) 00:12:28.200 fused_ordering(999) 00:12:28.200 fused_ordering(1000) 00:12:28.200 fused_ordering(1001) 00:12:28.200 fused_ordering(1002) 00:12:28.200 fused_ordering(1003) 00:12:28.200 fused_ordering(1004) 00:12:28.200 fused_ordering(1005) 00:12:28.200 fused_ordering(1006) 00:12:28.200 fused_ordering(1007) 00:12:28.200 fused_ordering(1008) 00:12:28.200 fused_ordering(1009) 00:12:28.200 fused_ordering(1010) 00:12:28.200 fused_ordering(1011) 00:12:28.200 fused_ordering(1012) 00:12:28.200 fused_ordering(1013) 00:12:28.200 fused_ordering(1014) 00:12:28.200 fused_ordering(1015) 00:12:28.200 fused_ordering(1016) 00:12:28.200 fused_ordering(1017) 00:12:28.200 fused_ordering(1018) 00:12:28.200 fused_ordering(1019) 00:12:28.200 fused_ordering(1020) 00:12:28.200 fused_ordering(1021) 00:12:28.200 fused_ordering(1022) 00:12:28.200 fused_ordering(1023) 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:12:28.200 rmmod nvme_tcp 00:12:28.200 rmmod nvme_fabrics 00:12:28.200 rmmod nvme_keyring 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 585691 ']' 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 585691 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 585691 ']' 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 585691 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 585691 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 585691' 00:12:28.200 killing process with pid 585691 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 585691 00:12:28.200 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 585691 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.461 15:17:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.370 15:17:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.370 00:12:30.370 real 0m13.307s 00:12:30.370 user 0m7.012s 00:12:30.370 sys 0m6.865s 00:12:30.370 15:17:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.370 15:17:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.370 ************************************ 00:12:30.370 END TEST nvmf_fused_ordering 00:12:30.370 ************************************ 00:12:30.370 15:17:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:30.370 15:17:39 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:30.370 15:17:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.370 15:17:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.370 
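Before the fused_ordering run above, the target was configured over JSON-RPC; rpc_cmd in these scripts is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the logged calls amount to roughly this sequence (arguments copied from the trace, comments added here):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks ("size: 1GB" above)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering example app then connects as an initiator with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and exercises fused command submission ordering; the fused_ordering(0)..fused_ordering(1023) lines above are its progress output.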
15:17:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.631 ************************************ 00:12:30.631 START TEST nvmf_delete_subsystem 00:12:30.631 ************************************ 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:30.631 * Looking for test storage... 00:12:30.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.631 15:17:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.631 15:17:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:38.765 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:38.766 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:38.766 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.766 
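The NIC discovery logged here (gather_supported_nvmf_pci_devs) works from PCI IDs and sysfs: it collects the PCI functions whose vendor/device IDs match the supported E810/X722/Mellanox parts, then maps each function to its kernel net device by listing /sys/bus/pci/devices/<bdf>/net/. A stripped-down sketch of that mapping step, with the two E810 functions found in this run hard-coded for illustration:

  # 0000:31:00.0 and 0000:31:00.1 were matched above as 0x8086:0x159b (E810)
  for pci in 0000:31:00.0 0000:31:00.1; do
      for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$netdev" ] || continue
          name=${netdev##*/}
          state=$(cat "$netdev/operstate")    # the harness additionally requires the link to be up ([[ up == up ]] above)
          echo "Found net device under $pci: $name ($state)"
      done
  done

On this machine that yields cvl_0_0 and cvl_0_1, the two interfaces the namespace setup below splits between target and initiator.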
15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:38.766 Found net devices under 0000:31:00.0: cvl_0_0 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:38.766 Found net devices under 0000:31:00.1: cvl_0_1 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.766 15:17:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:12:38.766 00:12:38.766 --- 10.0.0.2 ping statistics --- 00:12:38.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.766 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:12:38.766 00:12:38.766 --- 10.0.0.1 ping statistics --- 00:12:38.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.766 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:38.766 15:17:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=590845 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 590845 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 590845 ']' 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.766 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.766 [2024-07-15 15:17:48.067150] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:38.766 [2024-07-15 15:17:48.067213] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.766 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.766 [2024-07-15 15:17:48.143727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.766 [2024-07-15 15:17:48.211761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.766 [2024-07-15 15:17:48.211799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.766 [2024-07-15 15:17:48.211807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.767 [2024-07-15 15:17:48.211813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.767 [2024-07-15 15:17:48.211819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
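For reference, the per-test network rig that nvmf_tcp_init builds above can be reproduced by hand. This is a minimal sketch assembled from the commands visible in the trace; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the core mask are specific to this run:

    # move the target-side port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and sanity-check reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace (cores 0-1) and let the caller wait for /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!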
00:12:38.767 [2024-07-15 15:17:48.211872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.767 [2024-07-15 15:17:48.211877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 [2024-07-15 15:17:48.863037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 [2024-07-15 15:17:48.887233] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 NULL1 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 Delay0 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=591160 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:39.337 15:17:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:39.596 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.596 [2024-07-15 15:17:48.983860] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:41.508 15:17:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.508 15:17:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.508 15:17:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Write 
completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 [2024-07-15 15:17:51.146889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1238650 is same with the state(5) to be set 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error 
(sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 [2024-07-15 15:17:51.148225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235e90 is same with the state(5) to be set 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 starting I/O failed: -6 00:12:41.771 [2024-07-15 15:17:51.151603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018400d430 is same with the state(5) to be set 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed 
with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Read completed with error (sct=0, sc=8) 00:12:41.771 Write completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Write completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:41.772 Read completed with error (sct=0, sc=8) 00:12:42.756 [2024-07-15 15:17:52.122089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1214500 is same with the state(5) to be set 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 
00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 [2024-07-15 15:17:52.150667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1234d00 is same with the state(5) to be set 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 [2024-07-15 15:17:52.150763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235cb0 is same with the state(5) to be set 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 [2024-07-15 15:17:52.153400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018400cfe0 is same with the state(5) to be set 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 
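The "completed with error (sct=0, sc=8)" lines running through this stretch appear to be the expected fallout of deleting the subsystem while spdk_nvme_perf still has queue-depth-128 I/O pending against the deliberately slow Delay0 namespace. Condensed, the sequence the test drives over rpc.py looks roughly like this (a sketch; NQN, listener address and perf arguments are copied from this run):

    # target side: transport, subsystem, listener, and a null bdev wrapped in a 1s delay bdev
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # initiator side: start I/O, then delete the subsystem out from under it
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1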
00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Write completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 Read completed with error (sct=0, sc=8) 00:12:42.756 [2024-07-15 15:17:52.153971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f018400d740 is same with the state(5) to be set 00:12:42.756 Initializing NVMe Controllers 00:12:42.756 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.756 Controller IO queue size 128, less than required. 00:12:42.756 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:42.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:42.757 Initialization complete. Launching workers. 00:12:42.757 ======================================================== 00:12:42.757 Latency(us) 00:12:42.757 Device Information : IOPS MiB/s Average min max 00:12:42.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.44 0.08 899630.33 438.73 1043367.45 00:12:42.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.45 0.08 909569.69 288.83 1009405.40 00:12:42.757 ======================================================== 00:12:42.757 Total : 330.89 0.16 904540.13 288.83 1043367.45 00:12:42.757 00:12:42.757 [2024-07-15 15:17:52.154515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1214500 (9): Bad file descriptor 00:12:42.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:42.757 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.757 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:42.757 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 591160 00:12:42.757 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 591160 00:12:43.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (591160) - No such process 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 591160 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 591160 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 
-- # local arg=wait 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 591160 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 [2024-07-15 15:17:52.687119] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=591842 00:12:43.328 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:43.329 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:43.329 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:43.329 15:17:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:43.329 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.329 [2024-07-15 15:17:52.755013] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
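After the subsystem and listener are re-created, a second perf instance (pid 591842) is started and the script simply polls until it exits, failing the test if that takes too long. The repeated "(( delay++ > 20 )) / kill -0 591842 / sleep 0.5" lines that follow come from a bounded wait loop along these lines (a reconstruction from the trace, not the verbatim delete_subsystem.sh):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # poll every half second; give up after roughly ten seconds
        if (( delay++ > 20 )); then
            echo "perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done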
00:12:43.898 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:43.898 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:43.898 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.158 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.158 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:44.159 15:17:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.729 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.729 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:44.729 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.299 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.299 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:45.299 15:17:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.870 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.870 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:45.870 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.129 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.129 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:46.130 15:17:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.389 Initializing NVMe Controllers 00:12:46.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.389 Controller IO queue size 128, less than required. 00:12:46.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:46.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:46.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:46.389 Initialization complete. Launching workers. 
00:12:46.389 ======================================================== 00:12:46.389 Latency(us) 00:12:46.389 Device Information : IOPS MiB/s Average min max 00:12:46.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002463.19 1000197.64 1007608.12 00:12:46.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003747.16 1000156.30 1041438.74 00:12:46.389 ======================================================== 00:12:46.389 Total : 256.00 0.12 1003105.17 1000156.30 1041438.74 00:12:46.389 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 591842 00:12:46.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (591842) - No such process 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 591842 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.649 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:46.649 rmmod nvme_tcp 00:12:46.649 rmmod nvme_fabrics 00:12:46.908 rmmod nvme_keyring 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 590845 ']' 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 590845 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 590845 ']' 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 590845 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 590845 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:46.908 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 590845' 00:12:46.909 killing process with pid 590845 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 590845 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 590845 
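Teardown mirrors the setup: sync filesystems, unload the host-side NVMe/TCP modules, stop the target, and strip the per-test addressing. A condensed sketch of the nvmftestfini path shown above (pid 590845 and the cvl_* names belong to this run; the namespace removal step is an assumption, since _remove_spdk_ns is not expanded in the trace):

    sync
    modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # nvmfpid=590845 in this run
    # assumption: _remove_spdk_ns tears down the namespace created during setup
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1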
00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.909 15:17:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.484 15:17:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.484 00:12:49.484 real 0m18.551s 00:12:49.484 user 0m30.909s 00:12:49.484 sys 0m6.641s 00:12:49.484 15:17:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.484 15:17:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:49.484 ************************************ 00:12:49.484 END TEST nvmf_delete_subsystem 00:12:49.484 ************************************ 00:12:49.484 15:17:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.484 15:17:58 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:49.484 15:17:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.484 15:17:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.484 15:17:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.484 ************************************ 00:12:49.484 START TEST nvmf_ns_masking 00:12:49.484 ************************************ 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:49.484 * Looking for test storage... 
00:12:49.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=61593bec-01cb-4aa4-a699-513ce29280b0 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=9857a4c1-0f33-4ca5-a206-de4f56f4cf08 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=169619af-df06-4e1a-a84b-c9a42ce085c5 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.484 15:17:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:56.067 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:56.067 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.067 
15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.067 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.327 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.327 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.327 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:56.328 Found net devices under 0000:31:00.0: cvl_0_0 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:56.328 Found net devices under 0000:31:00.1: cvl_0_1 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.328 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:56.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:12:56.587 00:12:56.587 --- 10.0.0.2 ping statistics --- 00:12:56.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.587 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:12:56.587 00:12:56.587 --- 10.0.0.1 ping statistics --- 00:12:56.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.587 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:56.587 15:18:05 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=597175 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 597175 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 597175 ']' 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
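Putting the nvmf_tcp_init steps above together: the first e810 port becomes the target interface inside a dedicated network namespace, the second stays in the root namespace as the initiator, and both directions are ping-verified before the target application is started. Roughly (addresses, interface and namespace names copied from this log):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator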
00:12:56.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:56.587 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:56.587 [2024-07-15 15:18:06.061102] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:12:56.587 [2024-07-15 15:18:06.061164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.587 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.587 [2024-07-15 15:18:06.139264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.846 [2024-07-15 15:18:06.212011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.846 [2024-07-15 15:18:06.212052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.846 [2024-07-15 15:18:06.212060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.846 [2024-07-15 15:18:06.212067] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.846 [2024-07-15 15:18:06.212073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.846 [2024-07-15 15:18:06.212093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:57.413 [2024-07-15 15:18:06.979091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:57.413 15:18:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:57.672 Malloc1 00:12:57.672 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:57.931 Malloc2 00:12:57.931 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:12:57.931 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:58.190 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.190 [2024-07-15 15:18:07.788781] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.190 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:58.190 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 169619af-df06-4e1a-a84b-c9a42ce085c5 -a 10.0.0.2 -s 4420 -i 4 00:12:58.448 15:18:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.448 15:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.449 15:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.449 15:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.449 15:18:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:00.357 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.357 [ 0]:0x1 00:13:00.616 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:00.616 15:18:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b7ed82b415b4936b3b9a6b22146bed5 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b7ed82b415b4936b3b9a6b22146bed5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:00.616 [ 0]:0x1 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b7ed82b415b4936b3b9a6b22146bed5 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b7ed82b415b4936b3b9a6b22146bed5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.616 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:00.875 [ 1]:0x2 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.875 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.135 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:01.135 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:01.135 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 169619af-df06-4e1a-a84b-c9a42ce085c5 -a 10.0.0.2 -s 4420 -i 4 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:01.395 15:18:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.312 15:18:12 
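All target-side provisioning in ns_masking.sh goes through scripts/rpc.py against the nvmf_tgt that was just started inside the namespace. Condensed from the trace above, with the long workspace path shortened to rpc.py, the sequence is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # for the masking cases the namespace is re-added hidden by default, so each host
    # has to be granted visibility explicitly:
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible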
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:03.312 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:03.573 [ 0]:0x2 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:03.573 15:18:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:03.573 [ 0]:0x1 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.573 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b7ed82b415b4936b3b9a6b22146bed5 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b7ed82b415b4936b3b9a6b22146bed5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:03.834 [ 1]:0x2 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:03.834 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.095 [ 0]:0x2 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.095 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 169619af-df06-4e1a-a84b-c9a42ce085c5 -a 10.0.0.2 -s 4420 -i 4 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:04.355 15:18:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
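The visibility probe that keeps repeating above is just two nvme-cli calls, and the masking itself is toggled with two target RPCs; a simplified restatement of how that plays out in this run (not the verbatim ns_masking.sh, rpc.py path shortened):

    ns_is_visible() {                      # $1 = nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"                    # prints "[ 0]:0x1" when the ns is listed
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]     # masked namespaces report all zeros
    }

    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1                      # succeeds: host1 was granted access to nsid 1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1                      # fails again: the NGUID reads back as zeros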
00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:06.896 15:18:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:06.896 [ 0]:0x1 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8b7ed82b415b4936b3b9a6b22146bed5 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8b7ed82b415b4936b3b9a6b22146bed5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:06.896 [ 1]:0x2 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.896 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:06.897 [ 0]:0x2 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:06.897 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:07.158 [2024-07-15 15:18:16.694308] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:07.158 request: 00:13:07.158 { 00:13:07.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.158 "nsid": 2, 00:13:07.158 "host": "nqn.2016-06.io.spdk:host1", 00:13:07.158 "method": "nvmf_ns_remove_host", 00:13:07.158 "req_id": 1 00:13:07.158 } 00:13:07.158 Got JSON-RPC error response 00:13:07.158 response: 00:13:07.158 { 00:13:07.158 "code": -32602, 00:13:07.158 "message": "Invalid parameters" 00:13:07.158 } 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:07.158 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.159 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:07.159 [ 0]:0x2 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=04ae7d35f6f24279bf24069a77df193d 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
04ae7d35f6f24279bf24069a77df193d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=599816 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 599816 /var/tmp/host.sock 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 599816 ']' 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:07.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.419 15:18:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.419 [2024-07-15 15:18:16.909153] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:13:07.419 [2024-07-15 15:18:16.909203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid599816 ] 00:13:07.419 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.419 [2024-07-15 15:18:16.972515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.419 [2024-07-15 15:18:17.037976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 61593bec-01cb-4aa4-a699-513ce29280b0 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:08.362 15:18:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 61593BEC01CB4AA4A699513CE29280B0 -i 00:13:08.623 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 9857a4c1-0f33-4ca5-a206-de4f56f4cf08 00:13:08.623 15:18:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:08.623 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 9857A4C10F334CA5A206DE4F56F4CF08 -i 00:13:08.884 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:08.884 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:09.145 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:09.145 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:09.404 nvme0n1 00:13:09.404 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:09.404 15:18:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:13:09.665 nvme1n2 00:13:09.665 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:09.665 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:09.665 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:09.665 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:09.665 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 61593bec-01cb-4aa4-a699-513ce29280b0 == \6\1\5\9\3\b\e\c\-\0\1\c\b\-\4\a\a\4\-\a\6\9\9\-\5\1\3\c\e\2\9\2\8\0\b\0 ]] 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:09.926 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 9857a4c1-0f33-4ca5-a206-de4f56f4cf08 == \9\8\5\7\a\4\c\1\-\0\f\3\3\-\4\c\a\5\-\a\2\0\6\-\d\e\4\f\5\6\f\4\c\f\0\8 ]] 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 599816 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 599816 ']' 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 599816 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 599816 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 599816' 00:13:10.205 killing process with pid 599816 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 599816 00:13:10.205 15:18:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 599816 00:13:10.466 15:18:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:13:10.726 15:18:20 
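The last leg above brings up a second SPDK application as the initiator: spdk_tgt runs on its own RPC socket and core mask, bdev_nvme_attach_controller creates nvme0n1 / nvme1n2 over TCP for host1 and host2, and bdev_get_bdevs confirms that each bdev reports the uuid matching the NGUID set on the target (an NGUID being the UUID with its dashes stripped). Condensed from the trace, with values copied from this run and the rpc.py path shortened:

    build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &        # host-side instance, core mask 0x2
    hostpid=$!

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g 61593BEC01CB4AA4A699513CE29280B0                # uuid 61593bec-01cb-... minus the dashes
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
    rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
    # -> 61593bec-01cb-4aa4-a699-513ce29280b0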
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.726 rmmod nvme_tcp 00:13:10.726 rmmod nvme_fabrics 00:13:10.726 rmmod nvme_keyring 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 597175 ']' 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 597175 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 597175 ']' 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 597175 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 597175 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 597175' 00:13:10.726 killing process with pid 597175 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 597175 00:13:10.726 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 597175 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.986 15:18:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.896 15:18:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:12.896 00:13:12.896 real 0m23.830s 00:13:12.896 user 0m23.510s 00:13:12.896 sys 0m7.259s 00:13:12.896 15:18:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:12.896 15:18:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.896 ************************************ 00:13:12.896 END TEST nvmf_ns_masking 00:13:12.896 ************************************ 00:13:12.896 15:18:22 nvmf_tcp -- 
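nvmftestfini then unwinds the setup: the initiator kernel modules are removed, the nvmf_tgt is killed, and the test network is dismantled. Roughly, with the netns removal being my reading of what _remove_spdk_ns does here:

    modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                    # the nvmf_tgt started earlier (597175 in this run)
    ip netns delete cvl_0_0_ns_spdk    # assumption: the effect of _remove_spdk_ns for this netns
    ip -4 addr flush cvl_0_1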
common/autotest_common.sh@1142 -- # return 0 00:13:12.896 15:18:22 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:12.896 15:18:22 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:12.896 15:18:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:12.896 15:18:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:12.896 15:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.156 ************************************ 00:13:13.156 START TEST nvmf_nvme_cli 00:13:13.156 ************************************ 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:13.156 * Looking for test storage... 00:13:13.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.156 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- 
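One setup detail from the nvme_cli test worth calling out: the initiator identity is generated on the fly with nvme gen-hostnqn and carried in NVME_HOST for the later connects. Roughly, with the hostid derivation and the final connect line being illustrative assumptions rather than the verbatim script:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 in this run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumption: hostid = the uuid portion of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # later combined as, for example:
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn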
target/nvme_cli.sh@16 -- # nvmftestinit 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.157 15:18:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:21.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:21.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:21.299 Found net devices under 0000:31:00.0: cvl_0_0 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:21.299 Found net devices under 0000:31:00.1: cvl_0_1 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.299 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.300 15:18:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.300 15:18:30 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:13:21.300 00:13:21.300 --- 10.0.0.2 ping statistics --- 00:13:21.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.300 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:13:21.300 00:13:21.300 --- 10.0.0.1 ping statistics --- 00:13:21.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.300 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=605065 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 605065 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 605065 ']' 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:21.300 15:18:30 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.300 [2024-07-15 15:18:30.249136] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
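[editor's note] The nvmf_tcp_init trace above wires the two E810 port netdevs (cvl_0_0/cvl_0_1) into a loopback topology: the target-side interface is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and connectivity is verified with ping before nvmf_tgt is started inside the namespace. The following is a minimal bash sketch of that sequence; the interface names, addresses and namespace name are taken from the log, but it illustrates the flow rather than reproducing the exact nvmf/common.sh implementation.

  # Sketch of the TCP test topology set up by nvmf_tcp_init (names from the log above).
  set -e

  TGT_IF=cvl_0_0            # becomes the target-side interface
  INI_IF=cvl_0_1            # stays in the default namespace as the initiator side
  NS=cvl_0_0_ns_spdk        # private namespace that will host the NVMe-oF target

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # allow NVMe/TCP traffic to the default port and verify both directions
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # the target is then launched inside the namespace, as seen in the log:
  # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF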
00:13:21.300 [2024-07-15 15:18:30.249200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.300 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.300 [2024-07-15 15:18:30.324918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.300 [2024-07-15 15:18:30.401789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.300 [2024-07-15 15:18:30.401827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.300 [2024-07-15 15:18:30.401835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.300 [2024-07-15 15:18:30.401841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.300 [2024-07-15 15:18:30.401846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.300 [2024-07-15 15:18:30.401901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.300 [2024-07-15 15:18:30.401990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.300 [2024-07-15 15:18:30.402133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.300 [2024-07-15 15:18:30.402134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 [2024-07-15 15:18:31.084499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 Malloc0 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 Malloc1 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.560 [2024-07-15 15:18:31.174345] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.560 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -a 10.0.0.2 -s 4420 00:13:21.821 00:13:21.821 Discovery Log Number of Records 2, Generation counter 2 00:13:21.821 =====Discovery Log Entry 0====== 00:13:21.821 trtype: tcp 00:13:21.821 adrfam: ipv4 00:13:21.821 subtype: current discovery subsystem 00:13:21.821 treq: not required 00:13:21.821 portid: 0 00:13:21.821 trsvcid: 4420 00:13:21.821 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:21.821 traddr: 10.0.0.2 00:13:21.821 eflags: explicit discovery connections, duplicate discovery information 00:13:21.821 sectype: none 00:13:21.821 =====Discovery Log Entry 1====== 00:13:21.821 trtype: tcp 00:13:21.821 adrfam: ipv4 00:13:21.821 subtype: nvme subsystem 00:13:21.821 treq: not required 00:13:21.821 portid: 0 00:13:21.821 trsvcid: 4420 00:13:21.821 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:21.821 traddr: 10.0.0.2 00:13:21.821 eflags: none 00:13:21.821 sectype: none 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:21.821 15:18:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:23.207 15:18:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:25.167 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:25.167 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:25.167 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:25.428 15:18:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:25.428 /dev/nvme0n1 ]] 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.428 15:18:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:25.688 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.948 rmmod nvme_tcp 00:13:25.948 rmmod nvme_fabrics 00:13:25.948 rmmod nvme_keyring 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 605065 ']' 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 605065 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 605065 ']' 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 605065 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 605065 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 605065' 00:13:25.948 killing process with pid 605065 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 605065 00:13:25.948 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 605065 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.209 15:18:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.121 15:18:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.383 00:13:28.383 real 0m15.188s 00:13:28.383 user 0m23.276s 00:13:28.383 sys 0m6.037s 00:13:28.383 15:18:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.383 15:18:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:28.383 ************************************ 00:13:28.383 END TEST nvmf_nvme_cli 00:13:28.383 ************************************ 00:13:28.383 15:18:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:28.383 15:18:37 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:28.383 15:18:37 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:28.383 15:18:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.383 15:18:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.383 15:18:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.383 ************************************ 00:13:28.383 START TEST nvmf_vfio_user 00:13:28.383 ************************************ 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:28.383 * Looking for test storage... 00:13:28.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:28.383 
15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=606624 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 606624' 00:13:28.383 Process pid: 606624 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 606624 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 606624 ']' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.383 15:18:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:28.645 [2024-07-15 15:18:38.029077] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:28.645 [2024-07-15 15:18:38.029179] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.645 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.645 [2024-07-15 15:18:38.098863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.645 [2024-07-15 15:18:38.174621] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.645 [2024-07-15 15:18:38.174658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.645 [2024-07-15 15:18:38.174665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.645 [2024-07-15 15:18:38.174672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.645 [2024-07-15 15:18:38.174677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
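[editor's note] The vfio-user bring-up performed by the trace that follows reduces to a short RPC sequence: create a VFIOUSER transport, then for each of the two devices create a malloc bdev and a subsystem, attach the namespace, and listen on a per-controller directory under /var/run/vfio-user. Below is a condensed sketch with subsystem names, sizes and socket paths as they appear in the log; the loop mirrors what setup_nvmf_vfio_user does in the test script but is not a verbatim copy, and the repo-relative rpc.py path is assumed.

  # Condensed sketch of the vfio-user target setup driven below via rpc.py.
  rpc=./scripts/rpc.py

  $rpc nvmf_create_transport -t VFIOUSER

  mkdir -p /var/run/vfio-user
  for i in 1 2; do
      traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$traddr"

      $rpc bdev_malloc_create 64 512 -b Malloc$i     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
      $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a "$traddr" -s 0
  done

  # an initiator can then be pointed at the socket directory, as the test does:
  # ./build/bin/spdk_nvme_identify \
  #     -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'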
00:13:28.645 [2024-07-15 15:18:38.174788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.645 [2024-07-15 15:18:38.174965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.645 [2024-07-15 15:18:38.175284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.645 [2024-07-15 15:18:38.175285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.215 15:18:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:29.215 15:18:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:29.215 15:18:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:30.600 15:18:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:30.600 Malloc1 00:13:30.600 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:30.858 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:31.119 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:31.119 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:31.119 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:31.119 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:31.380 Malloc2 00:13:31.380 15:18:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:31.640 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:31.641 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:31.902 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:31.902 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:31.902 15:18:41 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:31.902 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:31.902 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:31.902 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:31.902 [2024-07-15 15:18:41.383161] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:31.902 [2024-07-15 15:18:41.383209] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607300 ] 00:13:31.902 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.902 [2024-07-15 15:18:41.415511] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:31.902 [2024-07-15 15:18:41.424245] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:31.902 [2024-07-15 15:18:41.424264] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f855e265000 00:13:31.902 [2024-07-15 15:18:41.425240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.426239] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.427248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.428250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.429255] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.430271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.431272] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.432275] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:31.902 [2024-07-15 15:18:41.433287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:31.902 [2024-07-15 15:18:41.433296] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f855e25a000 00:13:31.902 [2024-07-15 15:18:41.434623] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:31.902 [2024-07-15 15:18:41.451580] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:31.902 [2024-07-15 15:18:41.451607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:31.902 [2024-07-15 15:18:41.456408] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:31.902 [2024-07-15 15:18:41.456455] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:31.902 [2024-07-15 15:18:41.456541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:31.902 [2024-07-15 15:18:41.456557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:31.902 [2024-07-15 15:18:41.456563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:31.902 [2024-07-15 15:18:41.457411] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:31.902 [2024-07-15 15:18:41.457421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:31.902 [2024-07-15 15:18:41.457428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:31.902 [2024-07-15 15:18:41.458416] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:31.902 [2024-07-15 15:18:41.458425] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:31.902 [2024-07-15 15:18:41.458432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:31.902 [2024-07-15 15:18:41.459420] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:31.902 [2024-07-15 15:18:41.459428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:31.902 [2024-07-15 15:18:41.460424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:31.902 [2024-07-15 15:18:41.460433] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:31.902 [2024-07-15 15:18:41.460438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:31.903 [2024-07-15 15:18:41.460444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:31.903 [2024-07-15 15:18:41.460550] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:31.903 [2024-07-15 15:18:41.460555] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:31.903 [2024-07-15 15:18:41.460563] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:31.903 [2024-07-15 15:18:41.461433] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:31.903 [2024-07-15 15:18:41.462436] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:31.903 [2024-07-15 15:18:41.463443] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:31.903 [2024-07-15 15:18:41.464445] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:31.903 [2024-07-15 15:18:41.464511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:31.903 [2024-07-15 15:18:41.465456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:31.903 [2024-07-15 15:18:41.465463] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:31.903 [2024-07-15 15:18:41.465468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465489] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:31.903 [2024-07-15 15:18:41.465497] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465511] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:31.903 [2024-07-15 15:18:41.465516] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:31.903 [2024-07-15 15:18:41.465529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465588] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:31.903 [2024-07-15 15:18:41.465596] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:31.903 [2024-07-15 15:18:41.465600] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:31.903 [2024-07-15 15:18:41.465605] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:31.903 [2024-07-15 15:18:41.465609] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:31.903 [2024-07-15 15:18:41.465614] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:31.903 [2024-07-15 15:18:41.465618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.903 [2024-07-15 15:18:41.465670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.903 [2024-07-15 15:18:41.465678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.903 [2024-07-15 15:18:41.465686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:31.903 [2024-07-15 15:18:41.465691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465708] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465727] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:31.903 [2024-07-15 15:18:41.465732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465738] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465824] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465839] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:31.903 [2024-07-15 15:18:41.465843] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:31.903 [2024-07-15 15:18:41.465850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465872] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:31.903 [2024-07-15 15:18:41.465880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465899] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:31.903 [2024-07-15 15:18:41.465903] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:31.903 [2024-07-15 15:18:41.465909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:31.903 [2024-07-15 15:18:41.465959] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:31.903 [2024-07-15 15:18:41.465965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.465979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.465987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.465993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:13:31.903 [2024-07-15 15:18:41.466000] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.466006] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.466011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.466016] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.466021] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:31.903 [2024-07-15 15:18:41.466026] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:31.903 [2024-07-15 15:18:41.466031] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:31.903 [2024-07-15 15:18:41.466048] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.466060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.466072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.466081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.466092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.466103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.466114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:31.903 [2024-07-15 15:18:41.466124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:31.903 [2024-07-15 15:18:41.466137] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:31.903 [2024-07-15 15:18:41.466143] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:31.903 [2024-07-15 15:18:41.466147] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:31.903 [2024-07-15 15:18:41.466150] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:31.903 [2024-07-15 15:18:41.466156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:31.903 [2024-07-15 15:18:41.466164] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:31.903 
[2024-07-15 15:18:41.466168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:31.903 [2024-07-15 15:18:41.466174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:31.904 [2024-07-15 15:18:41.466181] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:31.904 [2024-07-15 15:18:41.466185] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:31.904 [2024-07-15 15:18:41.466191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:31.904 [2024-07-15 15:18:41.466199] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:31.904 [2024-07-15 15:18:41.466203] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:31.904 [2024-07-15 15:18:41.466209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:31.904 [2024-07-15 15:18:41.466216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:31.904 [2024-07-15 15:18:41.466228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:31.904 [2024-07-15 15:18:41.466238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:31.904 [2024-07-15 15:18:41.466245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:31.904 ===================================================== 00:13:31.904 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:31.904 ===================================================== 00:13:31.904 Controller Capabilities/Features 00:13:31.904 ================================ 00:13:31.904 Vendor ID: 4e58 00:13:31.904 Subsystem Vendor ID: 4e58 00:13:31.904 Serial Number: SPDK1 00:13:31.904 Model Number: SPDK bdev Controller 00:13:31.904 Firmware Version: 24.09 00:13:31.904 Recommended Arb Burst: 6 00:13:31.904 IEEE OUI Identifier: 8d 6b 50 00:13:31.904 Multi-path I/O 00:13:31.904 May have multiple subsystem ports: Yes 00:13:31.904 May have multiple controllers: Yes 00:13:31.904 Associated with SR-IOV VF: No 00:13:31.904 Max Data Transfer Size: 131072 00:13:31.904 Max Number of Namespaces: 32 00:13:31.904 Max Number of I/O Queues: 127 00:13:31.904 NVMe Specification Version (VS): 1.3 00:13:31.904 NVMe Specification Version (Identify): 1.3 00:13:31.904 Maximum Queue Entries: 256 00:13:31.904 Contiguous Queues Required: Yes 00:13:31.904 Arbitration Mechanisms Supported 00:13:31.904 Weighted Round Robin: Not Supported 00:13:31.904 Vendor Specific: Not Supported 00:13:31.904 Reset Timeout: 15000 ms 00:13:31.904 Doorbell Stride: 4 bytes 00:13:31.904 NVM Subsystem Reset: Not Supported 00:13:31.904 Command Sets Supported 00:13:31.904 NVM Command Set: Supported 00:13:31.904 Boot Partition: Not Supported 00:13:31.904 Memory Page Size Minimum: 4096 bytes 00:13:31.904 Memory Page Size Maximum: 4096 bytes 00:13:31.904 Persistent Memory Region: Not Supported 
00:13:31.904 Optional Asynchronous Events Supported 00:13:31.904 Namespace Attribute Notices: Supported 00:13:31.904 Firmware Activation Notices: Not Supported 00:13:31.904 ANA Change Notices: Not Supported 00:13:31.904 PLE Aggregate Log Change Notices: Not Supported 00:13:31.904 LBA Status Info Alert Notices: Not Supported 00:13:31.904 EGE Aggregate Log Change Notices: Not Supported 00:13:31.904 Normal NVM Subsystem Shutdown event: Not Supported 00:13:31.904 Zone Descriptor Change Notices: Not Supported 00:13:31.904 Discovery Log Change Notices: Not Supported 00:13:31.904 Controller Attributes 00:13:31.904 128-bit Host Identifier: Supported 00:13:31.904 Non-Operational Permissive Mode: Not Supported 00:13:31.904 NVM Sets: Not Supported 00:13:31.904 Read Recovery Levels: Not Supported 00:13:31.904 Endurance Groups: Not Supported 00:13:31.904 Predictable Latency Mode: Not Supported 00:13:31.904 Traffic Based Keep ALive: Not Supported 00:13:31.904 Namespace Granularity: Not Supported 00:13:31.904 SQ Associations: Not Supported 00:13:31.904 UUID List: Not Supported 00:13:31.904 Multi-Domain Subsystem: Not Supported 00:13:31.904 Fixed Capacity Management: Not Supported 00:13:31.904 Variable Capacity Management: Not Supported 00:13:31.904 Delete Endurance Group: Not Supported 00:13:31.904 Delete NVM Set: Not Supported 00:13:31.904 Extended LBA Formats Supported: Not Supported 00:13:31.904 Flexible Data Placement Supported: Not Supported 00:13:31.904 00:13:31.904 Controller Memory Buffer Support 00:13:31.904 ================================ 00:13:31.904 Supported: No 00:13:31.904 00:13:31.904 Persistent Memory Region Support 00:13:31.904 ================================ 00:13:31.904 Supported: No 00:13:31.904 00:13:31.904 Admin Command Set Attributes 00:13:31.904 ============================ 00:13:31.904 Security Send/Receive: Not Supported 00:13:31.904 Format NVM: Not Supported 00:13:31.904 Firmware Activate/Download: Not Supported 00:13:31.904 Namespace Management: Not Supported 00:13:31.904 Device Self-Test: Not Supported 00:13:31.904 Directives: Not Supported 00:13:31.904 NVMe-MI: Not Supported 00:13:31.904 Virtualization Management: Not Supported 00:13:31.904 Doorbell Buffer Config: Not Supported 00:13:31.904 Get LBA Status Capability: Not Supported 00:13:31.904 Command & Feature Lockdown Capability: Not Supported 00:13:31.904 Abort Command Limit: 4 00:13:31.904 Async Event Request Limit: 4 00:13:31.904 Number of Firmware Slots: N/A 00:13:31.904 Firmware Slot 1 Read-Only: N/A 00:13:31.904 Firmware Activation Without Reset: N/A 00:13:31.904 Multiple Update Detection Support: N/A 00:13:31.904 Firmware Update Granularity: No Information Provided 00:13:31.904 Per-Namespace SMART Log: No 00:13:31.904 Asymmetric Namespace Access Log Page: Not Supported 00:13:31.904 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:31.904 Command Effects Log Page: Supported 00:13:31.904 Get Log Page Extended Data: Supported 00:13:31.904 Telemetry Log Pages: Not Supported 00:13:31.904 Persistent Event Log Pages: Not Supported 00:13:31.904 Supported Log Pages Log Page: May Support 00:13:31.904 Commands Supported & Effects Log Page: Not Supported 00:13:31.904 Feature Identifiers & Effects Log Page:May Support 00:13:31.904 NVMe-MI Commands & Effects Log Page: May Support 00:13:31.904 Data Area 4 for Telemetry Log: Not Supported 00:13:31.904 Error Log Page Entries Supported: 128 00:13:31.904 Keep Alive: Supported 00:13:31.904 Keep Alive Granularity: 10000 ms 00:13:31.904 00:13:31.904 NVM Command Set Attributes 
00:13:31.904 ========================== 00:13:31.904 Submission Queue Entry Size 00:13:31.904 Max: 64 00:13:31.904 Min: 64 00:13:31.904 Completion Queue Entry Size 00:13:31.904 Max: 16 00:13:31.904 Min: 16 00:13:31.904 Number of Namespaces: 32 00:13:31.904 Compare Command: Supported 00:13:31.904 Write Uncorrectable Command: Not Supported 00:13:31.904 Dataset Management Command: Supported 00:13:31.904 Write Zeroes Command: Supported 00:13:31.904 Set Features Save Field: Not Supported 00:13:31.904 Reservations: Not Supported 00:13:31.904 Timestamp: Not Supported 00:13:31.904 Copy: Supported 00:13:31.904 Volatile Write Cache: Present 00:13:31.904 Atomic Write Unit (Normal): 1 00:13:31.904 Atomic Write Unit (PFail): 1 00:13:31.904 Atomic Compare & Write Unit: 1 00:13:31.904 Fused Compare & Write: Supported 00:13:31.904 Scatter-Gather List 00:13:31.904 SGL Command Set: Supported (Dword aligned) 00:13:31.904 SGL Keyed: Not Supported 00:13:31.904 SGL Bit Bucket Descriptor: Not Supported 00:13:31.904 SGL Metadata Pointer: Not Supported 00:13:31.904 Oversized SGL: Not Supported 00:13:31.904 SGL Metadata Address: Not Supported 00:13:31.904 SGL Offset: Not Supported 00:13:31.904 Transport SGL Data Block: Not Supported 00:13:31.904 Replay Protected Memory Block: Not Supported 00:13:31.904 00:13:31.904 Firmware Slot Information 00:13:31.904 ========================= 00:13:31.904 Active slot: 1 00:13:31.904 Slot 1 Firmware Revision: 24.09 00:13:31.904 00:13:31.904 00:13:31.904 Commands Supported and Effects 00:13:31.904 ============================== 00:13:31.904 Admin Commands 00:13:31.904 -------------- 00:13:31.904 Get Log Page (02h): Supported 00:13:31.904 Identify (06h): Supported 00:13:31.904 Abort (08h): Supported 00:13:31.904 Set Features (09h): Supported 00:13:31.904 Get Features (0Ah): Supported 00:13:31.904 Asynchronous Event Request (0Ch): Supported 00:13:31.904 Keep Alive (18h): Supported 00:13:31.904 I/O Commands 00:13:31.904 ------------ 00:13:31.904 Flush (00h): Supported LBA-Change 00:13:31.904 Write (01h): Supported LBA-Change 00:13:31.904 Read (02h): Supported 00:13:31.904 Compare (05h): Supported 00:13:31.904 Write Zeroes (08h): Supported LBA-Change 00:13:31.904 Dataset Management (09h): Supported LBA-Change 00:13:31.904 Copy (19h): Supported LBA-Change 00:13:31.904 00:13:31.904 Error Log 00:13:31.904 ========= 00:13:31.904 00:13:31.904 Arbitration 00:13:31.904 =========== 00:13:31.904 Arbitration Burst: 1 00:13:31.904 00:13:31.904 Power Management 00:13:31.904 ================ 00:13:31.904 Number of Power States: 1 00:13:31.904 Current Power State: Power State #0 00:13:31.904 Power State #0: 00:13:31.904 Max Power: 0.00 W 00:13:31.904 Non-Operational State: Operational 00:13:31.904 Entry Latency: Not Reported 00:13:31.904 Exit Latency: Not Reported 00:13:31.904 Relative Read Throughput: 0 00:13:31.904 Relative Read Latency: 0 00:13:31.904 Relative Write Throughput: 0 00:13:31.904 Relative Write Latency: 0 00:13:31.904 Idle Power: Not Reported 00:13:31.904 Active Power: Not Reported 00:13:31.904 Non-Operational Permissive Mode: Not Supported 00:13:31.904 00:13:31.904 Health Information 00:13:31.905 ================== 00:13:31.905 Critical Warnings: 00:13:31.905 Available Spare Space: OK 00:13:31.905 Temperature: OK 00:13:31.905 Device Reliability: OK 00:13:31.905 Read Only: No 00:13:31.905 Volatile Memory Backup: OK 00:13:31.905 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:31.905 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:31.905 Available Spare: 0% 00:13:31.905 
[2024-07-15 15:18:41.466347] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:31.905 [2024-07-15 15:18:41.466358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:31.905 [2024-07-15 15:18:41.466386] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:31.905 [2024-07-15 15:18:41.466395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.905 [2024-07-15 15:18:41.466401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.905 [2024-07-15 15:18:41.466407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.905 [2024-07-15 15:18:41.466414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:31.905 [2024-07-15 15:18:41.466465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:31.905 [2024-07-15 15:18:41.466475] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:31.905 [2024-07-15 15:18:41.467470] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:31.905 [2024-07-15 15:18:41.467520] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:31.905 [2024-07-15 15:18:41.467528] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:31.905 [2024-07-15 15:18:41.468476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:31.905 [2024-07-15 15:18:41.468487] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:31.905 [2024-07-15 15:18:41.468545] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:31.905 [2024-07-15 15:18:41.471892] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:31.905 Available Spare Threshold: 0% 00:13:31.905 Life Percentage Used: 0% 00:13:31.905 Data Units Read: 0 00:13:31.905 Data Units Written: 0 00:13:31.905 Host Read Commands: 0 00:13:31.905 Host Write Commands: 0 00:13:31.905 Controller Busy Time: 0 minutes 00:13:31.905 Power Cycles: 0 00:13:31.905 Power On Hours: 0 hours 00:13:31.905 Unsafe Shutdowns: 0 00:13:31.905 Unrecoverable Media Errors: 0 00:13:31.905 Lifetime Error Log Entries: 0 00:13:31.905 Warning Temperature Time: 0 minutes 00:13:31.905 Critical Temperature Time: 0 minutes 00:13:31.905 00:13:31.905 Number of Queues 00:13:31.905 ================ 00:13:31.905 Number of I/O Submission Queues: 127 00:13:31.905 Number of I/O Completion Queues: 127 00:13:31.905 00:13:31.905 Active Namespaces 00:13:31.905 ================= 00:13:31.905 Namespace ID:1 00:13:31.905 Error Recovery Timeout: Unlimited 00:13:31.905 Command
Set Identifier: NVM (00h) 00:13:31.905 Deallocate: Supported 00:13:31.905 Deallocated/Unwritten Error: Not Supported 00:13:31.905 Deallocated Read Value: Unknown 00:13:31.905 Deallocate in Write Zeroes: Not Supported 00:13:31.905 Deallocated Guard Field: 0xFFFF 00:13:31.905 Flush: Supported 00:13:31.905 Reservation: Supported 00:13:31.905 Namespace Sharing Capabilities: Multiple Controllers 00:13:31.905 Size (in LBAs): 131072 (0GiB) 00:13:31.905 Capacity (in LBAs): 131072 (0GiB) 00:13:31.905 Utilization (in LBAs): 131072 (0GiB) 00:13:31.905 NGUID: B87CAD9BE9BE4BA897CF1A29B41130D8 00:13:31.905 UUID: b87cad9b-e9be-4ba8-97cf-1a29b41130d8 00:13:31.905 Thin Provisioning: Not Supported 00:13:31.905 Per-NS Atomic Units: Yes 00:13:31.905 Atomic Boundary Size (Normal): 0 00:13:31.905 Atomic Boundary Size (PFail): 0 00:13:31.905 Atomic Boundary Offset: 0 00:13:31.905 Maximum Single Source Range Length: 65535 00:13:31.905 Maximum Copy Length: 65535 00:13:31.905 Maximum Source Range Count: 1 00:13:31.905 NGUID/EUI64 Never Reused: No 00:13:31.905 Namespace Write Protected: No 00:13:31.905 Number of LBA Formats: 1 00:13:31.905 Current LBA Format: LBA Format #00 00:13:31.905 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:31.905 00:13:31.905 15:18:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:32.165 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.165 [2024-07-15 15:18:41.673573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:37.447 Initializing NVMe Controllers 00:13:37.447 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:37.447 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:37.447 Initialization complete. Launching workers. 00:13:37.447 ======================================================== 00:13:37.447 Latency(us) 00:13:37.447 Device Information : IOPS MiB/s Average min max 00:13:37.448 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 44085.29 172.21 2902.77 905.31 7506.88 00:13:37.448 ======================================================== 00:13:37.448 Total : 44085.29 172.21 2902.77 905.31 7506.88 00:13:37.448 00:13:37.448 [2024-07-15 15:18:46.693185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:37.448 15:18:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:37.448 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.448 [2024-07-15 15:18:46.897177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.729 Initializing NVMe Controllers 00:13:42.729 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.729 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:42.729 Initialization complete. Launching workers. 
00:13:42.729 ======================================================== 00:13:42.729 Latency(us) 00:13:42.729 Device Information : IOPS MiB/s Average min max 00:13:42.729 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16052.92 62.71 7978.90 5987.09 8979.29 00:13:42.729 ======================================================== 00:13:42.729 Total : 16052.92 62.71 7978.90 5987.09 8979.29 00:13:42.729 00:13:42.729 [2024-07-15 15:18:51.937900] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.729 15:18:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:42.729 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.729 [2024-07-15 15:18:52.171970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.030 [2024-07-15 15:18:57.244091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.030 Initializing NVMe Controllers 00:13:48.030 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.030 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:48.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:48.030 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:48.030 Initialization complete. Launching workers. 00:13:48.030 Starting thread on core 2 00:13:48.030 Starting thread on core 3 00:13:48.030 Starting thread on core 1 00:13:48.030 15:18:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:48.030 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.030 [2024-07-15 15:18:57.531308] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:51.367 [2024-07-15 15:19:00.597422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:51.367 Initializing NVMe Controllers 00:13:51.367 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.367 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:51.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:51.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:51.367 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:51.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:51.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:51.367 Initialization complete. Launching workers. 
00:13:51.367 Starting thread on core 1 with urgent priority queue 00:13:51.367 Starting thread on core 2 with urgent priority queue 00:13:51.367 Starting thread on core 3 with urgent priority queue 00:13:51.367 Starting thread on core 0 with urgent priority queue 00:13:51.367 SPDK bdev Controller (SPDK1 ) core 0: 8367.33 IO/s 11.95 secs/100000 ios 00:13:51.367 SPDK bdev Controller (SPDK1 ) core 1: 12078.33 IO/s 8.28 secs/100000 ios 00:13:51.367 SPDK bdev Controller (SPDK1 ) core 2: 8168.00 IO/s 12.24 secs/100000 ios 00:13:51.367 SPDK bdev Controller (SPDK1 ) core 3: 11925.33 IO/s 8.39 secs/100000 ios 00:13:51.367 ======================================================== 00:13:51.367 00:13:51.367 15:19:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:51.367 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.367 [2024-07-15 15:19:00.880327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:51.367 Initializing NVMe Controllers 00:13:51.367 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.367 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:51.367 Namespace ID: 1 size: 0GB 00:13:51.367 Initialization complete. 00:13:51.367 INFO: using host memory buffer for IO 00:13:51.367 Hello world! 00:13:51.367 [2024-07-15 15:19:00.917544] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:51.367 15:19:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:51.626 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.626 [2024-07-15 15:19:01.176374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.009 Initializing NVMe Controllers 00:13:53.009 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.009 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.009 Initialization complete. Launching workers. 
00:13:53.009 submit (in ns) avg, min, max = 8256.4, 3944.2, 4016156.7 00:13:53.009 complete (in ns) avg, min, max = 17013.3, 2368.3, 4008958.3 00:13:53.009 00:13:53.009 Submit histogram 00:13:53.009 ================ 00:13:53.009 Range in us Cumulative Count 00:13:53.009 3.920 - 3.947: 0.0104% ( 2) 00:13:53.009 3.947 - 3.973: 2.3677% ( 454) 00:13:53.009 3.973 - 4.000: 8.6868% ( 1217) 00:13:53.009 4.000 - 4.027: 17.7372% ( 1743) 00:13:53.009 4.027 - 4.053: 28.0648% ( 1989) 00:13:53.009 4.053 - 4.080: 39.2803% ( 2160) 00:13:53.009 4.080 - 4.107: 53.1440% ( 2670) 00:13:53.009 4.107 - 4.133: 68.1396% ( 2888) 00:13:53.009 4.133 - 4.160: 81.6346% ( 2599) 00:13:53.009 4.160 - 4.187: 91.1003% ( 1823) 00:13:53.009 4.187 - 4.213: 96.1940% ( 981) 00:13:53.009 4.213 - 4.240: 98.2034% ( 387) 00:13:53.009 4.240 - 4.267: 99.1017% ( 173) 00:13:53.009 4.267 - 4.293: 99.3561% ( 49) 00:13:53.009 4.293 - 4.320: 99.4133% ( 11) 00:13:53.009 4.320 - 4.347: 99.4496% ( 7) 00:13:53.009 4.347 - 4.373: 99.4548% ( 1) 00:13:53.009 4.373 - 4.400: 99.4704% ( 3) 00:13:53.009 4.453 - 4.480: 99.4756% ( 1) 00:13:53.009 4.507 - 4.533: 99.4860% ( 2) 00:13:53.009 4.667 - 4.693: 99.4911% ( 1) 00:13:53.009 4.800 - 4.827: 99.5015% ( 2) 00:13:53.009 4.827 - 4.853: 99.5067% ( 1) 00:13:53.009 4.960 - 4.987: 99.5119% ( 1) 00:13:53.009 5.093 - 5.120: 99.5171% ( 1) 00:13:53.009 5.387 - 5.413: 99.5275% ( 2) 00:13:53.009 5.440 - 5.467: 99.5379% ( 2) 00:13:53.009 5.547 - 5.573: 99.5431% ( 1) 00:13:53.009 5.600 - 5.627: 99.5483% ( 1) 00:13:53.009 5.947 - 5.973: 99.5535% ( 1) 00:13:53.009 5.973 - 6.000: 99.5638% ( 2) 00:13:53.009 6.053 - 6.080: 99.5690% ( 1) 00:13:53.009 6.080 - 6.107: 99.5950% ( 5) 00:13:53.009 6.133 - 6.160: 99.6002% ( 1) 00:13:53.009 6.187 - 6.213: 99.6054% ( 1) 00:13:53.009 6.213 - 6.240: 99.6313% ( 5) 00:13:53.009 6.240 - 6.267: 99.6365% ( 1) 00:13:53.009 6.267 - 6.293: 99.6469% ( 2) 00:13:53.009 6.293 - 6.320: 99.6521% ( 1) 00:13:53.009 6.320 - 6.347: 99.6573% ( 1) 00:13:53.009 6.400 - 6.427: 99.6625% ( 1) 00:13:53.009 6.427 - 6.453: 99.6781% ( 3) 00:13:53.009 6.453 - 6.480: 99.6833% ( 1) 00:13:53.009 6.480 - 6.507: 99.6936% ( 2) 00:13:53.009 6.560 - 6.587: 99.6988% ( 1) 00:13:53.009 6.587 - 6.613: 99.7092% ( 2) 00:13:53.009 6.613 - 6.640: 99.7196% ( 2) 00:13:53.009 6.640 - 6.667: 99.7300% ( 2) 00:13:53.009 6.667 - 6.693: 99.7352% ( 1) 00:13:53.009 6.720 - 6.747: 99.7508% ( 3) 00:13:53.009 6.747 - 6.773: 99.7612% ( 2) 00:13:53.009 6.773 - 6.800: 99.7663% ( 1) 00:13:53.009 6.800 - 6.827: 99.7715% ( 1) 00:13:53.009 6.827 - 6.880: 99.7767% ( 1) 00:13:53.009 6.880 - 6.933: 99.7871% ( 2) 00:13:53.009 7.040 - 7.093: 99.7923% ( 1) 00:13:53.009 7.093 - 7.147: 99.7975% ( 1) 00:13:53.009 7.147 - 7.200: 99.8079% ( 2) 00:13:53.009 7.200 - 7.253: 99.8235% ( 3) 00:13:53.009 7.253 - 7.307: 99.8338% ( 2) 00:13:53.009 7.307 - 7.360: 99.8442% ( 2) 00:13:53.009 7.467 - 7.520: 99.8546% ( 2) 00:13:53.009 7.520 - 7.573: 99.8598% ( 1) 00:13:53.009 7.627 - 7.680: 99.8650% ( 1) 00:13:53.009 7.893 - 7.947: 99.8702% ( 1) 00:13:53.009 7.947 - 8.000: 99.8754% ( 1) 00:13:53.009 8.747 - 8.800: 99.8806% ( 1) 00:13:53.009 12.747 - 12.800: 99.8858% ( 1) 00:13:53.009 13.867 - 13.973: 99.8910% ( 1) 00:13:53.009 13.973 - 14.080: 99.8962% ( 1) 00:13:53.009 3986.773 - 4014.080: 99.9948% ( 19) 00:13:53.009 4014.080 - 4041.387: 100.0000% ( 1) 00:13:53.009 00:13:53.009 Complete histogram 00:13:53.009 ================== 00:13:53.009 Range in us Cumulative Count 00:13:53.009 2.360 - 2.373: 0.0104% ( 2) 00:13:53.009 2.387 - 2.400: 0.5400% ( 102) 00:13:53.009 2.400 
- 2.413: 0.8360% ( 57) 00:13:53.009 2.413 - 2.427: 0.9814% ( 28) 00:13:53.009 2.427 - 2.440: 1.1060% ( 24) 00:13:53.009 [2024-07-15 15:19:02.198231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.009 2.440 - 2.453: 1.9523% ( 163) 00:13:53.009 2.453 - 2.467: 44.2339% ( 8143) 00:13:53.009 2.467 - 2.480: 58.6427% ( 2775) 00:13:53.009 2.480 - 2.493: 70.6423% ( 2311) 00:13:53.009 2.493 - 2.507: 79.0851% ( 1626) 00:13:53.009 2.507 - 2.520: 81.8267% ( 528) 00:13:53.009 2.520 - 2.533: 85.8612% ( 777) 00:13:53.009 2.533 - 2.547: 91.4793% ( 1082) 00:13:53.009 2.547 - 2.560: 95.0049% ( 679) 00:13:53.009 2.560 - 2.573: 97.3207% ( 446) 00:13:53.009 2.573 - 2.587: 98.6188% ( 250) 00:13:53.009 2.587 - 2.600: 99.2160% ( 115) 00:13:53.009 2.600 - 2.613: 99.3250% ( 21) 00:13:53.009 2.613 - 2.627: 99.3613% ( 7) 00:13:53.009 2.627 - 2.640: 99.3665% ( 1) 00:13:53.009 2.680 - 2.693: 99.3717% ( 1) 00:13:53.009 4.213 - 4.240: 99.3769% ( 1) 00:13:53.009 4.373 - 4.400: 99.3821% ( 1) 00:13:53.009 4.480 - 4.507: 99.3873% ( 1) 00:13:53.009 4.507 - 4.533: 99.3977% ( 2) 00:13:53.009 4.560 - 4.587: 99.4029% ( 1) 00:13:53.009 4.613 - 4.640: 99.4133% ( 2) 00:13:53.009 4.640 - 4.667: 99.4185% ( 1) 00:13:53.009 4.667 - 4.693: 99.4236% ( 1) 00:13:53.009 4.800 - 4.827: 99.4340% ( 2) 00:13:53.009 4.827 - 4.853: 99.4392% ( 1) 00:13:53.009 4.853 - 4.880: 99.4496% ( 1) 00:13:53.009 4.880 - 4.907: 99.4548% ( 1) 00:13:53.009 4.933 - 4.960: 99.4600% ( 1) 00:13:53.009 4.960 - 4.987: 99.4652% ( 1) 00:13:53.009 4.987 - 5.013: 99.4756% ( 2) 00:13:53.010 5.040 - 5.067: 99.4860% ( 2) 00:13:53.010 5.227 - 5.253: 99.4911% ( 1) 00:13:53.010 5.253 - 5.280: 99.4963% ( 1) 00:13:53.010 5.307 - 5.333: 99.5015% ( 1) 00:13:53.010 5.333 - 5.360: 99.5067% ( 1) 00:13:53.010 5.360 - 5.387: 99.5119% ( 1) 00:13:53.010 5.387 - 5.413: 99.5171% ( 1) 00:13:53.010 5.413 - 5.440: 99.5275% ( 2) 00:13:53.010 5.440 - 5.467: 99.5327% ( 1) 00:13:53.010 5.493 - 5.520: 99.5431% ( 2) 00:13:53.010 5.573 - 5.600: 99.5535% ( 2) 00:13:53.010 5.600 - 5.627: 99.5586% ( 1) 00:13:53.010 5.707 - 5.733: 99.5638% ( 1) 00:13:53.010 5.813 - 5.840: 99.5690% ( 1) 00:13:53.010 5.840 - 5.867: 99.5742% ( 1) 00:13:53.010 5.947 - 5.973: 99.5794% ( 1) 00:13:53.010 6.053 - 6.080: 99.5898% ( 2) 00:13:53.010 6.267 - 6.293: 99.5950% ( 1) 00:13:53.010 6.320 - 6.347: 99.6002% ( 1) 00:13:53.010 6.453 - 6.480: 99.6054% ( 1) 00:13:53.010 7.520 - 7.573: 99.6106% ( 1) 00:13:53.010 8.267 - 8.320: 99.6158% ( 1) 00:13:53.010 10.293 - 10.347: 99.6210% ( 1) 00:13:53.010 10.400 - 10.453: 99.6261% ( 1) 00:13:53.010 11.573 - 11.627: 99.6313% ( 1) 00:13:53.010 12.533 - 12.587: 99.6365% ( 1) 00:13:53.010 3986.773 - 4014.080: 100.0000% ( 70) 00:13:53.010 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:53.010 [ 00:13:53.010 { 00:13:53.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:53.010 "subtype":
"Discovery", 00:13:53.010 "listen_addresses": [], 00:13:53.010 "allow_any_host": true, 00:13:53.010 "hosts": [] 00:13:53.010 }, 00:13:53.010 { 00:13:53.010 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:53.010 "subtype": "NVMe", 00:13:53.010 "listen_addresses": [ 00:13:53.010 { 00:13:53.010 "trtype": "VFIOUSER", 00:13:53.010 "adrfam": "IPv4", 00:13:53.010 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:53.010 "trsvcid": "0" 00:13:53.010 } 00:13:53.010 ], 00:13:53.010 "allow_any_host": true, 00:13:53.010 "hosts": [], 00:13:53.010 "serial_number": "SPDK1", 00:13:53.010 "model_number": "SPDK bdev Controller", 00:13:53.010 "max_namespaces": 32, 00:13:53.010 "min_cntlid": 1, 00:13:53.010 "max_cntlid": 65519, 00:13:53.010 "namespaces": [ 00:13:53.010 { 00:13:53.010 "nsid": 1, 00:13:53.010 "bdev_name": "Malloc1", 00:13:53.010 "name": "Malloc1", 00:13:53.010 "nguid": "B87CAD9BE9BE4BA897CF1A29B41130D8", 00:13:53.010 "uuid": "b87cad9b-e9be-4ba8-97cf-1a29b41130d8" 00:13:53.010 } 00:13:53.010 ] 00:13:53.010 }, 00:13:53.010 { 00:13:53.010 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:53.010 "subtype": "NVMe", 00:13:53.010 "listen_addresses": [ 00:13:53.010 { 00:13:53.010 "trtype": "VFIOUSER", 00:13:53.010 "adrfam": "IPv4", 00:13:53.010 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:53.010 "trsvcid": "0" 00:13:53.010 } 00:13:53.010 ], 00:13:53.010 "allow_any_host": true, 00:13:53.010 "hosts": [], 00:13:53.010 "serial_number": "SPDK2", 00:13:53.010 "model_number": "SPDK bdev Controller", 00:13:53.010 "max_namespaces": 32, 00:13:53.010 "min_cntlid": 1, 00:13:53.010 "max_cntlid": 65519, 00:13:53.010 "namespaces": [ 00:13:53.010 { 00:13:53.010 "nsid": 1, 00:13:53.010 "bdev_name": "Malloc2", 00:13:53.010 "name": "Malloc2", 00:13:53.010 "nguid": "2DD85458170B4344A034DE8C1FF612DF", 00:13:53.010 "uuid": "2dd85458-170b-4344-a034-de8c1ff612df" 00:13:53.010 } 00:13:53.010 ] 00:13:53.010 } 00:13:53.010 ] 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=611539 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:53.010 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.010 Malloc3 00:13:53.010 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:53.010 [2024-07-15 15:19:02.599652] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.269 [2024-07-15 15:19:02.737512] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.269 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:53.269 Asynchronous Event Request test 00:13:53.269 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.269 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:53.270 Registering asynchronous event callbacks... 00:13:53.270 Starting namespace attribute notice tests for all controllers... 00:13:53.270 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:53.270 aer_cb - Changed Namespace 00:13:53.270 Cleaning up... 00:13:53.531 [ 00:13:53.531 { 00:13:53.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:53.531 "subtype": "Discovery", 00:13:53.531 "listen_addresses": [], 00:13:53.531 "allow_any_host": true, 00:13:53.531 "hosts": [] 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:53.531 "subtype": "NVMe", 00:13:53.531 "listen_addresses": [ 00:13:53.531 { 00:13:53.531 "trtype": "VFIOUSER", 00:13:53.531 "adrfam": "IPv4", 00:13:53.531 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:53.531 "trsvcid": "0" 00:13:53.531 } 00:13:53.531 ], 00:13:53.531 "allow_any_host": true, 00:13:53.531 "hosts": [], 00:13:53.531 "serial_number": "SPDK1", 00:13:53.531 "model_number": "SPDK bdev Controller", 00:13:53.531 "max_namespaces": 32, 00:13:53.531 "min_cntlid": 1, 00:13:53.531 "max_cntlid": 65519, 00:13:53.531 "namespaces": [ 00:13:53.531 { 00:13:53.531 "nsid": 1, 00:13:53.531 "bdev_name": "Malloc1", 00:13:53.531 "name": "Malloc1", 00:13:53.531 "nguid": "B87CAD9BE9BE4BA897CF1A29B41130D8", 00:13:53.531 "uuid": "b87cad9b-e9be-4ba8-97cf-1a29b41130d8" 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "nsid": 2, 00:13:53.531 "bdev_name": "Malloc3", 00:13:53.531 "name": "Malloc3", 00:13:53.531 "nguid": "44DBE2E142DF4159AEB0412F40B0CCA3", 00:13:53.531 "uuid": "44dbe2e1-42df-4159-aeb0-412f40b0cca3" 00:13:53.531 } 00:13:53.531 ] 00:13:53.531 }, 00:13:53.531 { 00:13:53.531 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:53.531 "subtype": "NVMe", 00:13:53.531 "listen_addresses": [ 00:13:53.531 { 00:13:53.531 "trtype": "VFIOUSER", 00:13:53.531 "adrfam": "IPv4", 00:13:53.531 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:53.531 "trsvcid": "0" 00:13:53.531 } 00:13:53.531 ], 00:13:53.531 "allow_any_host": true, 00:13:53.531 "hosts": [], 00:13:53.531 "serial_number": "SPDK2", 00:13:53.531 "model_number": "SPDK bdev Controller", 00:13:53.531 
"max_namespaces": 32, 00:13:53.531 "min_cntlid": 1, 00:13:53.531 "max_cntlid": 65519, 00:13:53.531 "namespaces": [ 00:13:53.531 { 00:13:53.531 "nsid": 1, 00:13:53.531 "bdev_name": "Malloc2", 00:13:53.531 "name": "Malloc2", 00:13:53.531 "nguid": "2DD85458170B4344A034DE8C1FF612DF", 00:13:53.531 "uuid": "2dd85458-170b-4344-a034-de8c1ff612df" 00:13:53.531 } 00:13:53.531 ] 00:13:53.531 } 00:13:53.531 ] 00:13:53.531 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 611539 00:13:53.531 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:53.531 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:53.531 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:53.531 15:19:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:53.531 [2024-07-15 15:19:02.955620] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:13:53.531 [2024-07-15 15:19:02.955665] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611607 ] 00:13:53.531 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.531 [2024-07-15 15:19:02.987440] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:53.531 [2024-07-15 15:19:02.996126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:53.531 [2024-07-15 15:19:02.996147] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f25e04bf000 00:13:53.531 [2024-07-15 15:19:02.997125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.531 [2024-07-15 15:19:02.998128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.531 [2024-07-15 15:19:02.999142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.000146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.001148] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.002158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.003162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.004164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:53.531 [2024-07-15 15:19:03.005175] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:53.531 [2024-07-15 15:19:03.005184] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f25e04b4000 00:13:53.531 [2024-07-15 15:19:03.006507] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:53.531 [2024-07-15 15:19:03.027041] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:53.531 [2024-07-15 15:19:03.027065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:53.531 [2024-07-15 15:19:03.029118] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:53.531 [2024-07-15 15:19:03.029160] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:53.531 [2024-07-15 15:19:03.029241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:53.531 [2024-07-15 15:19:03.029257] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:53.531 [2024-07-15 15:19:03.029262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:53.531 [2024-07-15 15:19:03.030127] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:53.531 [2024-07-15 15:19:03.030136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:53.532 [2024-07-15 15:19:03.030143] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:53.532 [2024-07-15 15:19:03.031134] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:53.532 [2024-07-15 15:19:03.031142] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:53.532 [2024-07-15 15:19:03.031150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.032139] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:53.532 [2024-07-15 15:19:03.032149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.033149] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:53.532 [2024-07-15 15:19:03.033158] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:53.532 [2024-07-15 15:19:03.033163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.033169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.033275] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:53.532 [2024-07-15 15:19:03.033280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.033284] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:53.532 [2024-07-15 15:19:03.034157] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:53.532 [2024-07-15 15:19:03.035162] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:53.532 [2024-07-15 15:19:03.036166] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:53.532 [2024-07-15 15:19:03.037180] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.532 [2024-07-15 15:19:03.037217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:53.532 [2024-07-15 15:19:03.038187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:53.532 [2024-07-15 15:19:03.038195] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:53.532 [2024-07-15 15:19:03.038200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.038221] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:53.532 [2024-07-15 15:19:03.038228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.038240] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.532 [2024-07-15 15:19:03.038245] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.532 [2024-07-15 15:19:03.038257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.044891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.044902] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:53.532 [2024-07-15 15:19:03.044909] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:53.532 [2024-07-15 15:19:03.044916] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:53.532 [2024-07-15 15:19:03.044921] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:53.532 [2024-07-15 15:19:03.044926] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:53.532 [2024-07-15 15:19:03.044930] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:53.532 [2024-07-15 15:19:03.044935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.044942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.044952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.052889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.052904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.532 [2024-07-15 15:19:03.052913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.532 [2024-07-15 15:19:03.052921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.532 [2024-07-15 15:19:03.052929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.532 [2024-07-15 15:19:03.052934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.052942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.052951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.060890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.060898] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:53.532 [2024-07-15 15:19:03.060902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.060909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.060914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.060923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.068888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.068952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.068960] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.068970] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:53.532 [2024-07-15 15:19:03.068975] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:53.532 [2024-07-15 15:19:03.068981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.076890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.076901] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:53.532 [2024-07-15 15:19:03.076913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.076921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.076928] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.532 [2024-07-15 15:19:03.076932] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.532 [2024-07-15 15:19:03.076938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.084889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.084903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.084911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.084918] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:53.532 [2024-07-15 15:19:03.084922] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.532 [2024-07-15 15:19:03.084928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.092888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.092897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092932] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:53.532 [2024-07-15 15:19:03.092936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:53.532 [2024-07-15 15:19:03.092941] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:53.532 [2024-07-15 15:19:03.092962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.100889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.100903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:53.532 [2024-07-15 15:19:03.108890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:53.532 [2024-07-15 15:19:03.108903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:53.533 [2024-07-15 15:19:03.115889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:53.533 [2024-07-15 15:19:03.115904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:53.533 [2024-07-15 15:19:03.124888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:53.533 [2024-07-15 15:19:03.124905] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:53.533 [2024-07-15 15:19:03.124909] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:53.533 [2024-07-15 15:19:03.124913] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:53.533 [2024-07-15 15:19:03.124917] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:53.533 [2024-07-15 15:19:03.124923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:53.533 [2024-07-15 15:19:03.124930] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:53.533 [2024-07-15 15:19:03.124935] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:53.533 [2024-07-15 15:19:03.124941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:53.533 [2024-07-15 15:19:03.124948] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:53.533 [2024-07-15 15:19:03.124952] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:53.533 [2024-07-15 15:19:03.124958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:53.533 [2024-07-15 15:19:03.124965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:53.533 [2024-07-15 15:19:03.124969] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:53.533 [2024-07-15 15:19:03.124975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:53.533 [2024-07-15 15:19:03.132891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:53.533 [2024-07-15 15:19:03.132905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:53.533 [2024-07-15 15:19:03.132915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:53.533 [2024-07-15 15:19:03.132922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:53.533 ===================================================== 00:13:53.533 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:53.533 ===================================================== 00:13:53.533 Controller Capabilities/Features 00:13:53.533 ================================ 00:13:53.533 Vendor ID: 4e58 00:13:53.533 Subsystem Vendor ID: 4e58 00:13:53.533 Serial Number: SPDK2 00:13:53.533 Model Number: SPDK bdev Controller 00:13:53.533 Firmware Version: 24.09 00:13:53.533 Recommended Arb Burst: 6 00:13:53.533 IEEE OUI Identifier: 8d 6b 50 00:13:53.533 Multi-path I/O 00:13:53.533 May have multiple subsystem ports: Yes 00:13:53.533 May have multiple controllers: Yes 00:13:53.533 Associated with SR-IOV VF: No 00:13:53.533 Max Data Transfer Size: 131072 00:13:53.533 Max Number of Namespaces: 32 00:13:53.533 Max Number of I/O Queues: 127 00:13:53.533 NVMe Specification Version (VS): 1.3 00:13:53.533 NVMe Specification Version (Identify): 1.3 00:13:53.533 Maximum Queue Entries: 256 00:13:53.533 Contiguous Queues Required: Yes 00:13:53.533 Arbitration Mechanisms 
Supported 00:13:53.533 Weighted Round Robin: Not Supported 00:13:53.533 Vendor Specific: Not Supported 00:13:53.533 Reset Timeout: 15000 ms 00:13:53.533 Doorbell Stride: 4 bytes 00:13:53.533 NVM Subsystem Reset: Not Supported 00:13:53.533 Command Sets Supported 00:13:53.533 NVM Command Set: Supported 00:13:53.533 Boot Partition: Not Supported 00:13:53.533 Memory Page Size Minimum: 4096 bytes 00:13:53.533 Memory Page Size Maximum: 4096 bytes 00:13:53.533 Persistent Memory Region: Not Supported 00:13:53.533 Optional Asynchronous Events Supported 00:13:53.533 Namespace Attribute Notices: Supported 00:13:53.533 Firmware Activation Notices: Not Supported 00:13:53.533 ANA Change Notices: Not Supported 00:13:53.533 PLE Aggregate Log Change Notices: Not Supported 00:13:53.533 LBA Status Info Alert Notices: Not Supported 00:13:53.533 EGE Aggregate Log Change Notices: Not Supported 00:13:53.533 Normal NVM Subsystem Shutdown event: Not Supported 00:13:53.533 Zone Descriptor Change Notices: Not Supported 00:13:53.533 Discovery Log Change Notices: Not Supported 00:13:53.533 Controller Attributes 00:13:53.533 128-bit Host Identifier: Supported 00:13:53.533 Non-Operational Permissive Mode: Not Supported 00:13:53.533 NVM Sets: Not Supported 00:13:53.533 Read Recovery Levels: Not Supported 00:13:53.533 Endurance Groups: Not Supported 00:13:53.533 Predictable Latency Mode: Not Supported 00:13:53.533 Traffic Based Keep ALive: Not Supported 00:13:53.533 Namespace Granularity: Not Supported 00:13:53.533 SQ Associations: Not Supported 00:13:53.533 UUID List: Not Supported 00:13:53.533 Multi-Domain Subsystem: Not Supported 00:13:53.533 Fixed Capacity Management: Not Supported 00:13:53.533 Variable Capacity Management: Not Supported 00:13:53.533 Delete Endurance Group: Not Supported 00:13:53.533 Delete NVM Set: Not Supported 00:13:53.533 Extended LBA Formats Supported: Not Supported 00:13:53.533 Flexible Data Placement Supported: Not Supported 00:13:53.533 00:13:53.533 Controller Memory Buffer Support 00:13:53.533 ================================ 00:13:53.533 Supported: No 00:13:53.533 00:13:53.533 Persistent Memory Region Support 00:13:53.533 ================================ 00:13:53.533 Supported: No 00:13:53.533 00:13:53.533 Admin Command Set Attributes 00:13:53.533 ============================ 00:13:53.533 Security Send/Receive: Not Supported 00:13:53.533 Format NVM: Not Supported 00:13:53.533 Firmware Activate/Download: Not Supported 00:13:53.533 Namespace Management: Not Supported 00:13:53.533 Device Self-Test: Not Supported 00:13:53.533 Directives: Not Supported 00:13:53.533 NVMe-MI: Not Supported 00:13:53.533 Virtualization Management: Not Supported 00:13:53.533 Doorbell Buffer Config: Not Supported 00:13:53.533 Get LBA Status Capability: Not Supported 00:13:53.533 Command & Feature Lockdown Capability: Not Supported 00:13:53.533 Abort Command Limit: 4 00:13:53.533 Async Event Request Limit: 4 00:13:53.533 Number of Firmware Slots: N/A 00:13:53.533 Firmware Slot 1 Read-Only: N/A 00:13:53.533 Firmware Activation Without Reset: N/A 00:13:53.533 Multiple Update Detection Support: N/A 00:13:53.533 Firmware Update Granularity: No Information Provided 00:13:53.533 Per-Namespace SMART Log: No 00:13:53.533 Asymmetric Namespace Access Log Page: Not Supported 00:13:53.533 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:53.533 Command Effects Log Page: Supported 00:13:53.533 Get Log Page Extended Data: Supported 00:13:53.533 Telemetry Log Pages: Not Supported 00:13:53.533 Persistent Event Log Pages: Not Supported 
00:13:53.533 Supported Log Pages Log Page: May Support 00:13:53.533 Commands Supported & Effects Log Page: Not Supported 00:13:53.533 Feature Identifiers & Effects Log Page:May Support 00:13:53.533 NVMe-MI Commands & Effects Log Page: May Support 00:13:53.533 Data Area 4 for Telemetry Log: Not Supported 00:13:53.533 Error Log Page Entries Supported: 128 00:13:53.533 Keep Alive: Supported 00:13:53.533 Keep Alive Granularity: 10000 ms 00:13:53.533 00:13:53.533 NVM Command Set Attributes 00:13:53.533 ========================== 00:13:53.533 Submission Queue Entry Size 00:13:53.533 Max: 64 00:13:53.533 Min: 64 00:13:53.533 Completion Queue Entry Size 00:13:53.533 Max: 16 00:13:53.533 Min: 16 00:13:53.533 Number of Namespaces: 32 00:13:53.533 Compare Command: Supported 00:13:53.533 Write Uncorrectable Command: Not Supported 00:13:53.533 Dataset Management Command: Supported 00:13:53.533 Write Zeroes Command: Supported 00:13:53.533 Set Features Save Field: Not Supported 00:13:53.533 Reservations: Not Supported 00:13:53.533 Timestamp: Not Supported 00:13:53.533 Copy: Supported 00:13:53.533 Volatile Write Cache: Present 00:13:53.533 Atomic Write Unit (Normal): 1 00:13:53.533 Atomic Write Unit (PFail): 1 00:13:53.533 Atomic Compare & Write Unit: 1 00:13:53.533 Fused Compare & Write: Supported 00:13:53.533 Scatter-Gather List 00:13:53.533 SGL Command Set: Supported (Dword aligned) 00:13:53.533 SGL Keyed: Not Supported 00:13:53.533 SGL Bit Bucket Descriptor: Not Supported 00:13:53.533 SGL Metadata Pointer: Not Supported 00:13:53.533 Oversized SGL: Not Supported 00:13:53.533 SGL Metadata Address: Not Supported 00:13:53.533 SGL Offset: Not Supported 00:13:53.533 Transport SGL Data Block: Not Supported 00:13:53.533 Replay Protected Memory Block: Not Supported 00:13:53.533 00:13:53.533 Firmware Slot Information 00:13:53.533 ========================= 00:13:53.533 Active slot: 1 00:13:53.533 Slot 1 Firmware Revision: 24.09 00:13:53.533 00:13:53.533 00:13:53.533 Commands Supported and Effects 00:13:53.533 ============================== 00:13:53.533 Admin Commands 00:13:53.533 -------------- 00:13:53.533 Get Log Page (02h): Supported 00:13:53.533 Identify (06h): Supported 00:13:53.533 Abort (08h): Supported 00:13:53.533 Set Features (09h): Supported 00:13:53.533 Get Features (0Ah): Supported 00:13:53.533 Asynchronous Event Request (0Ch): Supported 00:13:53.533 Keep Alive (18h): Supported 00:13:53.533 I/O Commands 00:13:53.533 ------------ 00:13:53.533 Flush (00h): Supported LBA-Change 00:13:53.534 Write (01h): Supported LBA-Change 00:13:53.534 Read (02h): Supported 00:13:53.534 Compare (05h): Supported 00:13:53.534 Write Zeroes (08h): Supported LBA-Change 00:13:53.534 Dataset Management (09h): Supported LBA-Change 00:13:53.534 Copy (19h): Supported LBA-Change 00:13:53.534 00:13:53.534 Error Log 00:13:53.534 ========= 00:13:53.534 00:13:53.534 Arbitration 00:13:53.534 =========== 00:13:53.534 Arbitration Burst: 1 00:13:53.534 00:13:53.534 Power Management 00:13:53.534 ================ 00:13:53.534 Number of Power States: 1 00:13:53.534 Current Power State: Power State #0 00:13:53.534 Power State #0: 00:13:53.534 Max Power: 0.00 W 00:13:53.534 Non-Operational State: Operational 00:13:53.534 Entry Latency: Not Reported 00:13:53.534 Exit Latency: Not Reported 00:13:53.534 Relative Read Throughput: 0 00:13:53.534 Relative Read Latency: 0 00:13:53.534 Relative Write Throughput: 0 00:13:53.534 Relative Write Latency: 0 00:13:53.534 Idle Power: Not Reported 00:13:53.534 Active Power: Not Reported 00:13:53.534 
Non-Operational Permissive Mode: Not Supported 00:13:53.534 00:13:53.534 Health Information 00:13:53.534 ================== 00:13:53.534 Critical Warnings: 00:13:53.534 Available Spare Space: OK 00:13:53.534 Temperature: OK 00:13:53.534 Device Reliability: OK 00:13:53.534 Read Only: No 00:13:53.534 Volatile Memory Backup: OK 00:13:53.534 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:53.534 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:53.534 Available Spare: 0% 00:13:53.534 Available Sp[2024-07-15 15:19:03.133191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:53.534 [2024-07-15 15:19:03.136889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:53.534 [2024-07-15 15:19:03.136926] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:53.534 [2024-07-15 15:19:03.136935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.534 [2024-07-15 15:19:03.136942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.534 [2024-07-15 15:19:03.136948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.534 [2024-07-15 15:19:03.136954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.534 [2024-07-15 15:19:03.137311] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:53.534 [2024-07-15 15:19:03.137322] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:53.534 [2024-07-15 15:19:03.138310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.534 [2024-07-15 15:19:03.138357] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:53.534 [2024-07-15 15:19:03.138364] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:53.534 [2024-07-15 15:19:03.139316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:53.534 [2024-07-15 15:19:03.139328] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:53.534 [2024-07-15 15:19:03.139377] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:53.534 [2024-07-15 15:19:03.140754] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:53.796 are Threshold: 0% 00:13:53.796 Life Percentage Used: 0% 00:13:53.796 Data Units Read: 0 00:13:53.796 Data Units Written: 0 00:13:53.796 Host Read Commands: 0 00:13:53.796 Host Write Commands: 0 00:13:53.796 Controller Busy Time: 0 minutes 00:13:53.796 Power Cycles: 0 00:13:53.796 Power On Hours: 0 hours 00:13:53.796 Unsafe Shutdowns: 0 00:13:53.796 Unrecoverable Media 
Errors: 0 00:13:53.796 Lifetime Error Log Entries: 0 00:13:53.796 Warning Temperature Time: 0 minutes 00:13:53.796 Critical Temperature Time: 0 minutes 00:13:53.796 00:13:53.796 Number of Queues 00:13:53.796 ================ 00:13:53.796 Number of I/O Submission Queues: 127 00:13:53.796 Number of I/O Completion Queues: 127 00:13:53.796 00:13:53.796 Active Namespaces 00:13:53.796 ================= 00:13:53.796 Namespace ID:1 00:13:53.796 Error Recovery Timeout: Unlimited 00:13:53.796 Command Set Identifier: NVM (00h) 00:13:53.796 Deallocate: Supported 00:13:53.796 Deallocated/Unwritten Error: Not Supported 00:13:53.796 Deallocated Read Value: Unknown 00:13:53.796 Deallocate in Write Zeroes: Not Supported 00:13:53.796 Deallocated Guard Field: 0xFFFF 00:13:53.796 Flush: Supported 00:13:53.796 Reservation: Supported 00:13:53.796 Namespace Sharing Capabilities: Multiple Controllers 00:13:53.796 Size (in LBAs): 131072 (0GiB) 00:13:53.796 Capacity (in LBAs): 131072 (0GiB) 00:13:53.796 Utilization (in LBAs): 131072 (0GiB) 00:13:53.796 NGUID: 2DD85458170B4344A034DE8C1FF612DF 00:13:53.796 UUID: 2dd85458-170b-4344-a034-de8c1ff612df 00:13:53.796 Thin Provisioning: Not Supported 00:13:53.796 Per-NS Atomic Units: Yes 00:13:53.796 Atomic Boundary Size (Normal): 0 00:13:53.796 Atomic Boundary Size (PFail): 0 00:13:53.796 Atomic Boundary Offset: 0 00:13:53.796 Maximum Single Source Range Length: 65535 00:13:53.796 Maximum Copy Length: 65535 00:13:53.796 Maximum Source Range Count: 1 00:13:53.796 NGUID/EUI64 Never Reused: No 00:13:53.796 Namespace Write Protected: No 00:13:53.796 Number of LBA Formats: 1 00:13:53.796 Current LBA Format: LBA Format #00 00:13:53.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:53.796 00:13:53.796 15:19:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:53.796 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.796 [2024-07-15 15:19:03.338165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:59.082 Initializing NVMe Controllers 00:13:59.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:59.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:59.082 Initialization complete. Launching workers. 
00:13:59.082 ======================================================== 00:13:59.082 Latency(us) 00:13:59.082 Device Information : IOPS MiB/s Average min max 00:13:59.082 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44092.34 172.24 2902.28 914.28 7794.78 00:13:59.082 ======================================================== 00:13:59.082 Total : 44092.34 172.24 2902.28 914.28 7794.78 00:13:59.082 00:13:59.082 [2024-07-15 15:19:08.444102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:59.082 15:19:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:59.082 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.082 [2024-07-15 15:19:08.647678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.385 Initializing NVMe Controllers 00:14:04.385 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:04.385 Initialization complete. Launching workers. 00:14:04.385 ======================================================== 00:14:04.385 Latency(us) 00:14:04.385 Device Information : IOPS MiB/s Average min max 00:14:04.385 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33557.36 131.08 3813.23 1211.74 8987.07 00:14:04.385 ======================================================== 00:14:04.385 Total : 33557.36 131.08 3813.23 1211.74 8987.07 00:14:04.385 00:14:04.385 [2024-07-15 15:19:13.664262] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.385 15:19:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:04.385 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.385 [2024-07-15 15:19:13.895790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.670 [2024-07-15 15:19:19.042026] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.670 Initializing NVMe Controllers 00:14:09.670 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.670 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:09.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:09.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:09.670 Initialization complete. Launching workers. 
00:14:09.670 Starting thread on core 2 00:14:09.670 Starting thread on core 3 00:14:09.670 Starting thread on core 1 00:14:09.670 15:19:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:09.670 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.930 [2024-07-15 15:19:19.316303] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.225 [2024-07-15 15:19:22.361264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.225 Initializing NVMe Controllers 00:14:13.225 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.225 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.225 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:13.225 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:13.225 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:13.225 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:13.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:13.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:13.225 Initialization complete. Launching workers. 00:14:13.225 Starting thread on core 1 with urgent priority queue 00:14:13.225 Starting thread on core 2 with urgent priority queue 00:14:13.225 Starting thread on core 3 with urgent priority queue 00:14:13.225 Starting thread on core 0 with urgent priority queue 00:14:13.225 SPDK bdev Controller (SPDK2 ) core 0: 12025.33 IO/s 8.32 secs/100000 ios 00:14:13.225 SPDK bdev Controller (SPDK2 ) core 1: 13092.67 IO/s 7.64 secs/100000 ios 00:14:13.225 SPDK bdev Controller (SPDK2 ) core 2: 8036.00 IO/s 12.44 secs/100000 ios 00:14:13.225 SPDK bdev Controller (SPDK2 ) core 3: 13826.67 IO/s 7.23 secs/100000 ios 00:14:13.225 ======================================================== 00:14:13.225 00:14:13.225 15:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:13.225 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.225 [2024-07-15 15:19:22.641332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:13.225 Initializing NVMe Controllers 00:14:13.225 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.225 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.225 Namespace ID: 1 size: 0GB 00:14:13.225 Initialization complete. 00:14:13.225 INFO: using host memory buffer for IO 00:14:13.225 Hello world! 
00:14:13.225 [2024-07-15 15:19:22.651404] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:13.225 15:19:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:13.225 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.484 [2024-07-15 15:19:22.910139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:14.424 Initializing NVMe Controllers 00:14:14.424 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.424 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:14.424 Initialization complete. Launching workers. 00:14:14.424 submit (in ns) avg, min, max = 7800.1, 3896.7, 4995695.8 00:14:14.424 complete (in ns) avg, min, max = 25226.6, 2370.0, 7987934.2 00:14:14.424 00:14:14.424 Submit histogram 00:14:14.424 ================ 00:14:14.424 Range in us Cumulative Count 00:14:14.424 3.893 - 3.920: 1.5318% ( 228) 00:14:14.424 3.920 - 3.947: 7.2360% ( 849) 00:14:14.424 3.947 - 3.973: 17.4281% ( 1517) 00:14:14.424 3.973 - 4.000: 27.6471% ( 1521) 00:14:14.424 4.000 - 4.027: 38.0744% ( 1552) 00:14:14.424 4.027 - 4.053: 50.3561% ( 1828) 00:14:14.424 4.053 - 4.080: 66.1314% ( 2348) 00:14:14.424 4.080 - 4.107: 80.4824% ( 2136) 00:14:14.424 4.107 - 4.133: 91.4606% ( 1634) 00:14:14.424 4.133 - 4.160: 96.8355% ( 800) 00:14:14.424 4.160 - 4.187: 98.7906% ( 291) 00:14:14.424 4.187 - 4.213: 99.3483% ( 83) 00:14:14.424 4.213 - 4.240: 99.4625% ( 17) 00:14:14.424 4.240 - 4.267: 99.5297% ( 10) 00:14:14.424 4.267 - 4.293: 99.5499% ( 3) 00:14:14.424 4.320 - 4.347: 99.5566% ( 1) 00:14:14.424 4.347 - 4.373: 99.5633% ( 1) 00:14:14.424 4.507 - 4.533: 99.5700% ( 1) 00:14:14.424 4.640 - 4.667: 99.5767% ( 1) 00:14:14.424 4.773 - 4.800: 99.5834% ( 1) 00:14:14.424 5.493 - 5.520: 99.5902% ( 1) 00:14:14.424 5.893 - 5.920: 99.6103% ( 3) 00:14:14.424 5.920 - 5.947: 99.6305% ( 3) 00:14:14.424 5.947 - 5.973: 99.6372% ( 1) 00:14:14.424 5.973 - 6.000: 99.6439% ( 1) 00:14:14.424 6.000 - 6.027: 99.6641% ( 3) 00:14:14.424 6.027 - 6.053: 99.6708% ( 1) 00:14:14.424 6.160 - 6.187: 99.6842% ( 2) 00:14:14.424 6.187 - 6.213: 99.6909% ( 1) 00:14:14.424 6.213 - 6.240: 99.6977% ( 1) 00:14:14.424 6.240 - 6.267: 99.7044% ( 1) 00:14:14.424 6.267 - 6.293: 99.7111% ( 1) 00:14:14.424 6.320 - 6.347: 99.7245% ( 2) 00:14:14.424 6.347 - 6.373: 99.7380% ( 2) 00:14:14.424 6.427 - 6.453: 99.7447% ( 1) 00:14:14.424 6.453 - 6.480: 99.7514% ( 1) 00:14:14.424 6.533 - 6.560: 99.7581% ( 1) 00:14:14.424 6.560 - 6.587: 99.7783% ( 3) 00:14:14.424 6.587 - 6.613: 99.7850% ( 1) 00:14:14.424 6.613 - 6.640: 99.7917% ( 1) 00:14:14.424 6.640 - 6.667: 99.8052% ( 2) 00:14:14.424 6.720 - 6.747: 99.8186% ( 2) 00:14:14.424 6.827 - 6.880: 99.8320% ( 2) 00:14:14.424 7.040 - 7.093: 99.8388% ( 1) 00:14:14.424 7.307 - 7.360: 99.8455% ( 1) 00:14:14.424 7.680 - 7.733: 99.8522% ( 1) 00:14:14.424 7.733 - 7.787: 99.8589% ( 1) 00:14:14.424 7.787 - 7.840: 99.8656% ( 1) 00:14:14.424 9.387 - 9.440: 99.8723% ( 1) 00:14:14.424 9.440 - 9.493: 99.8791% ( 1) 00:14:14.424 12.480 - 12.533: 99.8858% ( 1) 00:14:14.424 12.640 - 12.693: 99.8992% ( 2) 00:14:14.424 13.973 - 14.080: 99.9059% ( 1) 00:14:14.424 3003.733 - 3017.387: 99.9127% ( 1) 00:14:14.424 3795.627 - 3822.933: 99.9194% ( 1) 00:14:14.424 3986.773 - 4014.080: 99.9933% ( 11) 00:14:14.424 4969.813 
- 4997.120: 100.0000% ( 1) 00:14:14.424 00:14:14.424 Complete histogram 00:14:14.424 ================== 00:14:14.424 Range in us Cumulative Count 00:14:14.424 2.360 - 2.373: 0.0067% ( 1) 00:14:14.424 2.373 - 2.387: 0.0134% ( 1) 00:14:14.424 2.387 - 2.400: 0.1680% ( 23) 00:14:14.424 2.400 - 2.413: 1.0481% ( 131) 00:14:14.424 2.413 - 2.427: 1.1153% ( 10) 00:14:14.424 2.427 - 2.440: 32.5181% ( 4674) 00:14:14.424 2.440 - 2.453: 53.1779% ( 3075) 00:14:14.424 2.453 - 2.467: 65.6611% ( 1858) 00:14:14.424 2.467 - 2.480: 75.5308% ( 1469) 00:14:14.424 2.480 - 2.493: 80.9325% ( 804) 00:14:14.424 2.493 - 2.507: 82.4510% ( 226) 00:14:14.424 2.507 - 2.520: 86.3410% ( 579) 00:14:14.424 2.520 - 2.533: 91.6152% ( 785) 00:14:14.424 2.533 - 2.547: 95.0148% ( 506) 00:14:14.424 2.547 - 2.560: 97.1513% ( 318) 00:14:14.425 2.560 - 2.573: 98.3942% ( 185) 00:14:14.425 2.573 - 2.587: 99.0325% ( 95) 00:14:14.425 2.587 - 2.600: 99.1333% ( 15) 00:14:14.425 2.600 - 2.613: 99.1669% ( 5) 00:14:14.425 2.613 - 2.627: 99.1736% ( 1) 00:14:14.425 2.627 - [2024-07-15 15:19:24.004528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:14.718 2.640: 99.1803% ( 1) 00:14:14.718 2.720 - 2.733: 99.1870% ( 1) 00:14:14.718 4.267 - 4.293: 99.1938% ( 1) 00:14:14.718 4.453 - 4.480: 99.2005% ( 1) 00:14:14.718 4.480 - 4.507: 99.2072% ( 1) 00:14:14.718 4.507 - 4.533: 99.2206% ( 2) 00:14:14.718 4.560 - 4.587: 99.2274% ( 1) 00:14:14.718 4.587 - 4.613: 99.2341% ( 1) 00:14:14.718 4.613 - 4.640: 99.2408% ( 1) 00:14:14.718 4.640 - 4.667: 99.2475% ( 1) 00:14:14.718 4.667 - 4.693: 99.2542% ( 1) 00:14:14.718 4.693 - 4.720: 99.2677% ( 2) 00:14:14.718 4.720 - 4.747: 99.2811% ( 2) 00:14:14.718 4.773 - 4.800: 99.2878% ( 1) 00:14:14.718 4.800 - 4.827: 99.2945% ( 1) 00:14:14.718 4.853 - 4.880: 99.3013% ( 1) 00:14:14.718 4.880 - 4.907: 99.3080% ( 1) 00:14:14.718 4.960 - 4.987: 99.3147% ( 1) 00:14:14.718 5.040 - 5.067: 99.3214% ( 1) 00:14:14.718 5.200 - 5.227: 99.3281% ( 1) 00:14:14.718 5.333 - 5.360: 99.3349% ( 1) 00:14:14.718 5.387 - 5.413: 99.3416% ( 1) 00:14:14.718 5.413 - 5.440: 99.3483% ( 1) 00:14:14.718 5.467 - 5.493: 99.3550% ( 1) 00:14:14.718 5.493 - 5.520: 99.3617% ( 1) 00:14:14.718 5.547 - 5.573: 99.3684% ( 1) 00:14:14.718 5.627 - 5.653: 99.3752% ( 1) 00:14:14.718 5.707 - 5.733: 99.3819% ( 1) 00:14:14.718 5.813 - 5.840: 99.3953% ( 2) 00:14:14.718 6.080 - 6.107: 99.4020% ( 1) 00:14:14.718 9.920 - 9.973: 99.4088% ( 1) 00:14:14.718 10.080 - 10.133: 99.4155% ( 1) 00:14:14.718 10.453 - 10.507: 99.4222% ( 1) 00:14:14.718 1003.520 - 1010.347: 99.4289% ( 1) 00:14:14.718 1140.053 - 1146.880: 99.4356% ( 1) 00:14:14.718 1993.387 - 2007.040: 99.4424% ( 1) 00:14:14.718 2990.080 - 3003.733: 99.4491% ( 1) 00:14:14.718 3003.733 - 3017.387: 99.4558% ( 1) 00:14:14.718 3986.773 - 4014.080: 99.9866% ( 79) 00:14:14.718 4969.813 - 4997.120: 99.9933% ( 1) 00:14:14.718 7973.547 - 8028.160: 100.0000% ( 1) 00:14:14.718 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:14.718 [ 00:14:14.718 { 00:14:14.718 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:14.718 "subtype": "Discovery", 00:14:14.718 "listen_addresses": [], 00:14:14.718 "allow_any_host": true, 00:14:14.718 "hosts": [] 00:14:14.718 }, 00:14:14.718 { 00:14:14.718 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:14.718 "subtype": "NVMe", 00:14:14.718 "listen_addresses": [ 00:14:14.718 { 00:14:14.718 "trtype": "VFIOUSER", 00:14:14.718 "adrfam": "IPv4", 00:14:14.718 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:14.718 "trsvcid": "0" 00:14:14.718 } 00:14:14.718 ], 00:14:14.718 "allow_any_host": true, 00:14:14.718 "hosts": [], 00:14:14.718 "serial_number": "SPDK1", 00:14:14.718 "model_number": "SPDK bdev Controller", 00:14:14.718 "max_namespaces": 32, 00:14:14.718 "min_cntlid": 1, 00:14:14.718 "max_cntlid": 65519, 00:14:14.718 "namespaces": [ 00:14:14.718 { 00:14:14.718 "nsid": 1, 00:14:14.718 "bdev_name": "Malloc1", 00:14:14.718 "name": "Malloc1", 00:14:14.718 "nguid": "B87CAD9BE9BE4BA897CF1A29B41130D8", 00:14:14.718 "uuid": "b87cad9b-e9be-4ba8-97cf-1a29b41130d8" 00:14:14.718 }, 00:14:14.718 { 00:14:14.718 "nsid": 2, 00:14:14.718 "bdev_name": "Malloc3", 00:14:14.718 "name": "Malloc3", 00:14:14.718 "nguid": "44DBE2E142DF4159AEB0412F40B0CCA3", 00:14:14.718 "uuid": "44dbe2e1-42df-4159-aeb0-412f40b0cca3" 00:14:14.718 } 00:14:14.718 ] 00:14:14.718 }, 00:14:14.718 { 00:14:14.718 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:14.718 "subtype": "NVMe", 00:14:14.718 "listen_addresses": [ 00:14:14.718 { 00:14:14.718 "trtype": "VFIOUSER", 00:14:14.718 "adrfam": "IPv4", 00:14:14.718 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:14.718 "trsvcid": "0" 00:14:14.718 } 00:14:14.718 ], 00:14:14.718 "allow_any_host": true, 00:14:14.718 "hosts": [], 00:14:14.718 "serial_number": "SPDK2", 00:14:14.718 "model_number": "SPDK bdev Controller", 00:14:14.718 "max_namespaces": 32, 00:14:14.718 "min_cntlid": 1, 00:14:14.718 "max_cntlid": 65519, 00:14:14.718 "namespaces": [ 00:14:14.718 { 00:14:14.718 "nsid": 1, 00:14:14.718 "bdev_name": "Malloc2", 00:14:14.718 "name": "Malloc2", 00:14:14.718 "nguid": "2DD85458170B4344A034DE8C1FF612DF", 00:14:14.718 "uuid": "2dd85458-170b-4344-a034-de8c1ff612df" 00:14:14.718 } 00:14:14.718 ] 00:14:14.718 } 00:14:14.718 ] 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=615685 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:14.718 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:14.718 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.979 Malloc4 00:14:14.979 [2024-07-15 15:19:24.399271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:14.979 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:14.979 [2024-07-15 15:19:24.564336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:14.979 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:15.240 Asynchronous Event Request test 00:14:15.240 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.240 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:15.240 Registering asynchronous event callbacks... 00:14:15.240 Starting namespace attribute notice tests for all controllers... 00:14:15.240 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:15.240 aer_cb - Changed Namespace 00:14:15.240 Cleaning up... 00:14:15.240 [ 00:14:15.240 { 00:14:15.240 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:15.240 "subtype": "Discovery", 00:14:15.240 "listen_addresses": [], 00:14:15.240 "allow_any_host": true, 00:14:15.240 "hosts": [] 00:14:15.240 }, 00:14:15.240 { 00:14:15.240 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:15.240 "subtype": "NVMe", 00:14:15.240 "listen_addresses": [ 00:14:15.240 { 00:14:15.240 "trtype": "VFIOUSER", 00:14:15.240 "adrfam": "IPv4", 00:14:15.240 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:15.240 "trsvcid": "0" 00:14:15.240 } 00:14:15.240 ], 00:14:15.240 "allow_any_host": true, 00:14:15.240 "hosts": [], 00:14:15.240 "serial_number": "SPDK1", 00:14:15.240 "model_number": "SPDK bdev Controller", 00:14:15.240 "max_namespaces": 32, 00:14:15.240 "min_cntlid": 1, 00:14:15.240 "max_cntlid": 65519, 00:14:15.240 "namespaces": [ 00:14:15.240 { 00:14:15.240 "nsid": 1, 00:14:15.240 "bdev_name": "Malloc1", 00:14:15.240 "name": "Malloc1", 00:14:15.240 "nguid": "B87CAD9BE9BE4BA897CF1A29B41130D8", 00:14:15.240 "uuid": "b87cad9b-e9be-4ba8-97cf-1a29b41130d8" 00:14:15.240 }, 00:14:15.240 { 00:14:15.240 "nsid": 2, 00:14:15.240 "bdev_name": "Malloc3", 00:14:15.240 "name": "Malloc3", 00:14:15.240 "nguid": "44DBE2E142DF4159AEB0412F40B0CCA3", 00:14:15.240 "uuid": "44dbe2e1-42df-4159-aeb0-412f40b0cca3" 00:14:15.240 } 00:14:15.240 ] 00:14:15.240 }, 00:14:15.240 { 00:14:15.240 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:15.240 "subtype": "NVMe", 00:14:15.240 "listen_addresses": [ 00:14:15.240 { 00:14:15.240 "trtype": "VFIOUSER", 00:14:15.240 "adrfam": "IPv4", 00:14:15.240 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:15.240 "trsvcid": "0" 00:14:15.240 } 00:14:15.240 ], 00:14:15.240 "allow_any_host": true, 00:14:15.240 "hosts": [], 00:14:15.240 "serial_number": "SPDK2", 00:14:15.240 "model_number": "SPDK bdev Controller", 00:14:15.240 
"max_namespaces": 32, 00:14:15.240 "min_cntlid": 1, 00:14:15.240 "max_cntlid": 65519, 00:14:15.240 "namespaces": [ 00:14:15.240 { 00:14:15.240 "nsid": 1, 00:14:15.240 "bdev_name": "Malloc2", 00:14:15.240 "name": "Malloc2", 00:14:15.240 "nguid": "2DD85458170B4344A034DE8C1FF612DF", 00:14:15.240 "uuid": "2dd85458-170b-4344-a034-de8c1ff612df" 00:14:15.240 }, 00:14:15.240 { 00:14:15.240 "nsid": 2, 00:14:15.240 "bdev_name": "Malloc4", 00:14:15.240 "name": "Malloc4", 00:14:15.240 "nguid": "764F524C1C294A7FA1A84BFC25F2B4CE", 00:14:15.240 "uuid": "764f524c-1c29-4a7f-a1a8-4bfc25f2b4ce" 00:14:15.240 } 00:14:15.240 ] 00:14:15.240 } 00:14:15.240 ] 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 615685 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 606624 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 606624 ']' 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 606624 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 606624 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 606624' 00:14:15.240 killing process with pid 606624 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 606624 00:14:15.240 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 606624 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=615973 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 615973' 00:14:15.500 Process pid: 615973 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 615973 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 615973 ']' 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.500 15:19:24 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.501 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.501 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.501 15:19:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:15.501 [2024-07-15 15:19:25.044394] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:15.501 [2024-07-15 15:19:25.045717] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:15.501 [2024-07-15 15:19:25.045770] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.501 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.501 [2024-07-15 15:19:25.116296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:15.761 [2024-07-15 15:19:25.181245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.761 [2024-07-15 15:19:25.181283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.761 [2024-07-15 15:19:25.181290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.761 [2024-07-15 15:19:25.181297] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.761 [2024-07-15 15:19:25.181302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.761 [2024-07-15 15:19:25.181353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.761 [2024-07-15 15:19:25.181441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.761 [2024-07-15 15:19:25.181582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.761 [2024-07-15 15:19:25.181583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.761 [2024-07-15 15:19:25.243851] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:15.761 [2024-07-15 15:19:25.243989] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:15.761 [2024-07-15 15:19:25.244055] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:15.761 [2024-07-15 15:19:25.244729] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:15.761 [2024-07-15 15:19:25.244899] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:16.333 15:19:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.333 15:19:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:16.333 15:19:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:17.272 15:19:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:17.533 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:17.533 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:17.533 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:17.533 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:17.533 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:17.794 Malloc1 00:14:17.794 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:17.794 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:18.054 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:18.054 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.054 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:18.054 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:18.315 Malloc2 00:14:18.315 15:19:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:18.575 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:18.575 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 615973 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 615973 ']' 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 615973 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.834 15:19:28 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 615973 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 615973' 00:14:18.834 killing process with pid 615973 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 615973 00:14:18.834 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 615973 00:14:19.094 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:19.094 15:19:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:19.094 00:14:19.094 real 0m50.734s 00:14:19.094 user 3m21.173s 00:14:19.094 sys 0m3.030s 00:14:19.094 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.094 15:19:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:19.094 ************************************ 00:14:19.094 END TEST nvmf_vfio_user 00:14:19.094 ************************************ 00:14:19.094 15:19:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.094 15:19:28 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:19.094 15:19:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.094 15:19:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.094 15:19:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.094 ************************************ 00:14:19.094 START TEST nvmf_vfio_user_nvme_compliance 00:14:19.094 ************************************ 00:14:19.094 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:19.354 * Looking for test storage... 
00:14:19.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=616725 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 616725' 00:14:19.354 Process pid: 616725 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 616725 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 616725 ']' 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.354 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.355 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.355 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.355 15:19:28 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:19.355 [2024-07-15 15:19:28.829327] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:14:19.355 [2024-07-15 15:19:28.829394] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.355 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.355 [2024-07-15 15:19:28.901450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.615 [2024-07-15 15:19:28.974820] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.615 [2024-07-15 15:19:28.974860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.615 [2024-07-15 15:19:28.974867] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.615 [2024-07-15 15:19:28.974873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.615 [2024-07-15 15:19:28.974879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
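At this point the trace shows nvmf_tgt coming up for the compliance suite (started above as build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7); the rpc_cmd calls that follow create the VFIOUSER transport, a 64 MB malloc bdev and the test subsystem. Outside the autotest harness, roughly the same endpoint could be stood up with SPDK's scripts/rpc.py client; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and a build tree abbreviated here to ./spdk:

# Start a dedicated NVMe-oF target on three cores, then wait for its RPC socket to answer.
./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
until ./spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.2; done

# Mirror the rpc_cmd sequence traced below: vfio-user transport, malloc-backed namespace,
# subsystem nqn.2021-09.io.spdk:cnode0 listening on /var/run/vfio-user.
mkdir -p /var/run/vfio-user
./spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0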
00:14:19.615 [2024-07-15 15:19:28.974982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.615 [2024-07-15 15:19:28.975226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.615 [2024-07-15 15:19:28.975231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.190 15:19:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.190 15:19:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:20.190 15:19:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.127 malloc0 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:21.127 15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.127 
15:19:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:21.386 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.386 00:14:21.386 00:14:21.386 CUnit - A unit testing framework for C - Version 2.1-3 00:14:21.386 http://cunit.sourceforge.net/ 00:14:21.386 00:14:21.386 00:14:21.386 Suite: nvme_compliance 00:14:21.386 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 15:19:30.866372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.386 [2024-07-15 15:19:30.867764] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:21.386 [2024-07-15 15:19:30.867780] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:21.386 [2024-07-15 15:19:30.867786] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:21.386 [2024-07-15 15:19:30.869387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.386 passed 00:14:21.386 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 15:19:30.965012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.386 [2024-07-15 15:19:30.969030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.646 passed 00:14:21.646 Test: admin_identify_ns ...[2024-07-15 15:19:31.064166] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.646 [2024-07-15 15:19:31.123900] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.646 [2024-07-15 15:19:31.131901] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:21.646 [2024-07-15 15:19:31.153017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.646 passed 00:14:21.646 Test: admin_get_features_mandatory_features ...[2024-07-15 15:19:31.247505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.646 [2024-07-15 15:19:31.250523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.906 passed 00:14:21.906 Test: admin_get_features_optional_features ...[2024-07-15 15:19:31.346076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.906 [2024-07-15 15:19:31.349091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:21.906 passed 00:14:21.906 Test: admin_set_features_number_of_queues ...[2024-07-15 15:19:31.440195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.167 [2024-07-15 15:19:31.544992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.167 passed 00:14:22.167 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 15:19:31.638012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.167 [2024-07-15 15:19:31.642039] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.167 passed 00:14:22.167 Test: admin_get_log_page_with_lpo ...[2024-07-15 15:19:31.734135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.426 [2024-07-15 15:19:31.801896] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:22.426 [2024-07-15 15:19:31.814969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.426 passed 00:14:22.426 Test: fabric_property_get ...[2024-07-15 15:19:31.909022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.426 [2024-07-15 15:19:31.910291] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:22.426 [2024-07-15 15:19:31.912042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.426 passed 00:14:22.426 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 15:19:32.005600] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.426 [2024-07-15 15:19:32.006862] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:22.426 [2024-07-15 15:19:32.010629] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.687 passed 00:14:22.687 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 15:19:32.102776] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.687 [2024-07-15 15:19:32.184892] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:22.687 [2024-07-15 15:19:32.200888] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:22.687 [2024-07-15 15:19:32.205979] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.687 passed 00:14:22.687 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 15:19:32.299989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.687 [2024-07-15 15:19:32.301230] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:22.687 [2024-07-15 15:19:32.303008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.946 passed 00:14:22.946 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 15:19:32.396131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.946 [2024-07-15 15:19:32.471896] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:22.946 [2024-07-15 15:19:32.495891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:22.946 [2024-07-15 15:19:32.500978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.946 passed 00:14:23.206 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 15:19:32.594953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.206 [2024-07-15 15:19:32.596188] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:23.206 [2024-07-15 15:19:32.596208] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:23.206 [2024-07-15 15:19:32.597970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.206 passed 00:14:23.206 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 15:19:32.691136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.206 [2024-07-15 15:19:32.782893] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:23.206 [2024-07-15 15:19:32.790891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:23.206 [2024-07-15 15:19:32.798891] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:23.206 [2024-07-15 15:19:32.806890] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:23.465 [2024-07-15 15:19:32.835982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.465 passed 00:14:23.465 Test: admin_create_io_sq_verify_pc ...[2024-07-15 15:19:32.931018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:23.465 [2024-07-15 15:19:32.949900] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:23.465 [2024-07-15 15:19:32.967152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:23.465 passed 00:14:23.465 Test: admin_create_io_qp_max_qps ...[2024-07-15 15:19:33.055695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:24.847 [2024-07-15 15:19:34.158893] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:25.108 [2024-07-15 15:19:34.532905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.108 passed 00:14:25.108 Test: admin_create_io_sq_shared_cq ...[2024-07-15 15:19:34.625131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:25.369 [2024-07-15 15:19:34.757891] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:25.369 [2024-07-15 15:19:34.794946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:25.369 passed 00:14:25.369 00:14:25.369 Run Summary: Type Total Ran Passed Failed Inactive 00:14:25.369 suites 1 1 n/a 0 0 00:14:25.369 tests 18 18 18 0 0 00:14:25.369 asserts 360 360 360 0 n/a 00:14:25.369 00:14:25.369 Elapsed time = 1.645 seconds 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 616725 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 616725 ']' 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 616725 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 616725 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 616725' 00:14:25.369 killing process with pid 616725 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 616725 00:14:25.369 15:19:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 616725 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:25.631 00:14:25.631 real 0m6.412s 00:14:25.631 user 0m18.331s 00:14:25.631 sys 0m0.463s 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 ************************************ 00:14:25.631 END TEST nvmf_vfio_user_nvme_compliance 00:14:25.631 ************************************ 00:14:25.631 15:19:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:25.631 15:19:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:25.631 15:19:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:25.631 15:19:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.631 15:19:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:25.631 ************************************ 00:14:25.631 START TEST nvmf_vfio_user_fuzz 00:14:25.631 ************************************ 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:25.631 * Looking for test storage... 00:14:25.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.631 15:19:35 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.631 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:25.632 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=618117 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 618117' 00:14:25.893 Process pid: 618117 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 618117 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 618117 ']' 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
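While the fuzz-pass target starts up, the trace that follows repeats the same vfio-user subsystem setup and then exercises the controller with the nvme_fuzz example application for 30 seconds. A commented restatement of that invocation as it appears further down (path shortened to ./spdk; the -m/-t/-S/-F descriptions follow standard SPDK application usage, while -N and -a are simply passed through as the harness does):

#   -m 0x2      core mask for the fuzzer, keeping it off the target's core 0
#   -t 30       run time in seconds
#   -S 123456   fixed starting seed so a failing run can be reproduced
#   -F '...'    transport ID of the vfio-user controller under test
#   -N -a       additional harness options, passed through uncommented here
./spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a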
00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:25.893 15:19:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:26.463 15:19:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:26.463 15:19:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:26.463 15:19:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 malloc0 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:27.847 15:19:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:59.975 Fuzzing completed. 
Shutting down the fuzz application 00:14:59.975 00:14:59.975 Dumping successful admin opcodes: 00:14:59.975 8, 9, 10, 24, 00:14:59.975 Dumping successful io opcodes: 00:14:59.975 0, 00:14:59.975 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1048416, total successful commands: 4131, random_seed: 3338271296 00:14:59.975 NS: 0x200003a1ef00 admin qp, Total commands completed: 260207, total successful commands: 2096, random_seed: 1869008960 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 618117 ']' 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 618117' 00:14:59.976 killing process with pid 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 618117 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:59.976 00:14:59.976 real 0m32.672s 00:14:59.976 user 0m39.081s 00:14:59.976 sys 0m22.563s 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.976 15:20:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:59.976 ************************************ 00:14:59.976 END TEST nvmf_vfio_user_fuzz 00:14:59.976 ************************************ 00:14:59.976 15:20:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.976 15:20:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:59.976 15:20:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.976 15:20:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.976 15:20:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.976 ************************************ 00:14:59.976 
START TEST nvmf_host_management 00:14:59.976 ************************************ 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:59.976 * Looking for test storage... 00:14:59.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.976 15:20:07 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.976 15:20:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.976 15:20:08 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.976 15:20:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:15:06.563 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:06.564 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:06.564 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:06.564 Found net devices under 0000:31:00.0: cvl_0_0 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:06.564 Found net devices under 0000:31:00.1: cvl_0_1 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:06.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:15:06.564 00:15:06.564 --- 10.0.0.2 ping statistics --- 00:15:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.564 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:06.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:15:06.564 00:15:06.564 --- 10.0.0.1 ping statistics --- 00:15:06.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.564 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:06.564 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=628327 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 628327 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 628327 ']' 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:06.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.565 15:20:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:06.565 [2024-07-15 15:20:15.852225] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:06.565 [2024-07-15 15:20:15.852288] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.565 [2024-07-15 15:20:15.929975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:06.565 [2024-07-15 15:20:16.005133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.565 [2024-07-15 15:20:16.005171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.565 [2024-07-15 15:20:16.005179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.565 [2024-07-15 15:20:16.005185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.565 [2024-07-15 15:20:16.005191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.565 [2024-07-15 15:20:16.005320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.565 [2024-07-15 15:20:16.005473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:06.565 [2024-07-15 15:20:16.005819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:06.565 [2024-07-15 15:20:16.005820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.138 [2024-07-15 15:20:16.678449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.138 15:20:16 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.138 Malloc0 00:15:07.138 [2024-07-15 15:20:16.741792] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.138 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=628694 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 628694 /var/tmp/bdevperf.sock 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 628694 ']' 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
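The target bring-up exercised above, condensed into standalone commands — a sketch rather than a verbatim replay of the harness: interface names, addresses and arguments are taken from the log, paths are shortened relative to the SPDK checkout, and the subsystem RPCs batched through rpcs.txt are only summarized in the trailing comment.

    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host side reaches the target-namespace address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the namespace reaches the host
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs, then:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # the Malloc0 bdev, the nqn.2016-06.io.spdk:cnode0 subsystem and its 10.0.0.2:4420 listener
    # are created by the RPCs batched in rpcs.txt above; they are not echoed individually in the log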
00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:07.399 { 00:15:07.399 "params": { 00:15:07.399 "name": "Nvme$subsystem", 00:15:07.399 "trtype": "$TEST_TRANSPORT", 00:15:07.399 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:07.399 "adrfam": "ipv4", 00:15:07.399 "trsvcid": "$NVMF_PORT", 00:15:07.399 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:07.399 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:07.399 "hdgst": ${hdgst:-false}, 00:15:07.399 "ddgst": ${ddgst:-false} 00:15:07.399 }, 00:15:07.399 "method": "bdev_nvme_attach_controller" 00:15:07.399 } 00:15:07.399 EOF 00:15:07.399 )") 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:07.399 15:20:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:07.399 "params": { 00:15:07.399 "name": "Nvme0", 00:15:07.399 "trtype": "tcp", 00:15:07.399 "traddr": "10.0.0.2", 00:15:07.399 "adrfam": "ipv4", 00:15:07.399 "trsvcid": "4420", 00:15:07.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:07.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:07.399 "hdgst": false, 00:15:07.399 "ddgst": false 00:15:07.399 }, 00:15:07.399 "method": "bdev_nvme_attach_controller" 00:15:07.399 }' 00:15:07.399 [2024-07-15 15:20:16.843854] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:07.399 [2024-07-15 15:20:16.843911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628694 ] 00:15:07.399 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.399 [2024-07-15 15:20:16.906731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.399 [2024-07-15 15:20:16.971624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.659 Running I/O for 10 seconds... 
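The bdevperf invocation logged above, written out with its generated JSON inline. A sketch: the command line, the queue/IO-size/workload options and the bdev_nvme_attach_controller entry are as logged, while the enclosing "subsystems"/"bdev" wrapper emitted by gen_nvmf_target_json is an assumption (only the inner config entry is printed verbatim above).

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(cat <<'EOF'
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      } ] } ]
    }
    EOF
    )

Once the controller attaches, Nvme0n1 appears as a bdev and the 10-second verify run starts; the harness then polls the bdevperf RPC socket until reads are actually flowing, roughly as follows (retry count and threshold as logged, any sleep between retries omitted):

    (( i = 10 ))
    while (( i != 0 )); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')   # 583 in the run below
        [ "$read_io_count" -ge 100 ] && break
        (( i-- ))
    done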
00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=583 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 583 -ge 100 ']' 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.230 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:08.230 [2024-07-15 15:20:17.696961] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be 
set 00:15:08.230 [2024-07-15 15:20:17.697020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697027] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697041] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697054] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697069] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697081] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.230 [2024-07-15 15:20:17.697088] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697094] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697100] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697115] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697136] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697169] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697189] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697202] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697215] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697230] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697243] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697256] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697285] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697299] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697306] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697312] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697319] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697340] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697346] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697373] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697388] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697409] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697415] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3d0b0 is same with the state(5) to be set 00:15:08.231 [2024-07-15 15:20:17.697729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.697985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.697992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.698011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.698028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.698044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.698061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.231 [2024-07-15 15:20:17.698077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.231 [2024-07-15 15:20:17.698087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:08.232 [2024-07-15 15:20:17.698494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 
15:20:17.698661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.232 [2024-07-15 15:20:17.698784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.232 [2024-07-15 15:20:17.698791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.233 [2024-07-15 15:20:17.698800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.233 [2024-07-15 15:20:17.698807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.233 [2024-07-15 15:20:17.698816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.233 [2024-07-15 15:20:17.698823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.233 [2024-07-15 15:20:17.698832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:08.233 [2024-07-15 15:20:17.698839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.233 [2024-07-15 15:20:17.698849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150360 is same with the state(5) to be set 00:15:08.233 [2024-07-15 15:20:17.698896] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1150360 was disconnected and freed. reset controller. 00:15:08.233 [2024-07-15 15:20:17.700117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.233 task offset: 82432 on job bdev=Nvme0n1 fails 00:15:08.233 00:15:08.233 Latency(us) 00:15:08.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.233 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:08.233 Job: Nvme0n1 ended in about 0.46 seconds with error 00:15:08.233 Verification LBA range: start 0x0 length 0x400 00:15:08.233 Nvme0n1 : 0.46 1401.92 87.62 139.32 0.00 40359.91 7099.73 35170.99 00:15:08.233 =================================================================================================================== 00:15:08.233 Total : 1401.92 87.62 139.32 0.00 40359.91 7099.73 35170.99 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:08.233 [2024-07-15 15:20:17.702304] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:08.233 [2024-07-15 15:20:17.702328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf6120 (9): Bad file descriptor 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:08.233 [2024-07-15 15:20:17.704530] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:15:08.233 [2024-07-15 15:20:17.704618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:08.233 [2024-07-15 15:20:17.704647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:08.233 [2024-07-15 15:20:17.704663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:15:08.233 [2024-07-15 15:20:17.704671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:15:08.233 [2024-07-15 15:20:17.704679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:15:08.233 [2024-07-15 15:20:17.704686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xcf6120 
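The failure injected mid-run above boils down to two target-side RPCs: the host NQN is removed from the subsystem's allow list while I/O is in flight — every queued READ then completes with ABORTED - SQ DELETION and the qpair is disconnected and freed — and it is re-added shortly after; a reconnect attempt falling inside that window is refused, which is the "does not allow host" error seen above. Written as explicit calls (a sketch; NQNs and the error text are as logged):

    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # a reconnect attempted in this window is rejected by the target:
    #   "Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'"
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0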
00:15:08.233 [2024-07-15 15:20:17.704707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf6120 (9): Bad file descriptor 00:15:08.233 [2024-07-15 15:20:17.704719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:08.233 [2024-07-15 15:20:17.704726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:08.233 [2024-07-15 15:20:17.704733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:08.233 [2024-07-15 15:20:17.704747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.233 15:20:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 628694 00:15:09.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (628694) - No such process 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:09.173 { 00:15:09.173 "params": { 00:15:09.173 "name": "Nvme$subsystem", 00:15:09.173 "trtype": "$TEST_TRANSPORT", 00:15:09.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:09.173 "adrfam": "ipv4", 00:15:09.173 "trsvcid": "$NVMF_PORT", 00:15:09.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:09.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:09.173 "hdgst": ${hdgst:-false}, 00:15:09.173 "ddgst": ${ddgst:-false} 00:15:09.173 }, 00:15:09.173 "method": "bdev_nvme_attach_controller" 00:15:09.173 } 00:15:09.173 EOF 00:15:09.173 )") 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:15:09.173 15:20:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:09.173 "params": { 00:15:09.173 "name": "Nvme0", 00:15:09.173 "trtype": "tcp", 00:15:09.173 "traddr": "10.0.0.2", 00:15:09.173 "adrfam": "ipv4", 00:15:09.173 "trsvcid": "4420", 00:15:09.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:09.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:09.173 "hdgst": false, 00:15:09.173 "ddgst": false 00:15:09.173 }, 00:15:09.173 "method": "bdev_nvme_attach_controller" 00:15:09.174 }' 00:15:09.174 [2024-07-15 15:20:18.780301] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:09.174 [2024-07-15 15:20:18.780360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629051 ] 00:15:09.434 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.434 [2024-07-15 15:20:18.844842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.434 [2024-07-15 15:20:18.909518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.695 Running I/O for 1 seconds... 00:15:10.636 00:15:10.636 Latency(us) 00:15:10.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.636 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:10.636 Verification LBA range: start 0x0 length 0x400 00:15:10.636 Nvme0n1 : 1.02 1687.72 105.48 0.00 0.00 37241.46 6171.31 32986.45 00:15:10.636 =================================================================================================================== 00:15:10.636 Total : 1687.72 105.48 0.00 0.00 37241.46 6171.31 32986.45 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.896 rmmod nvme_tcp 00:15:10.896 rmmod nvme_fabrics 00:15:10.896 rmmod nvme_keyring 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@489 -- # '[' -n 628327 ']' 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 628327 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 628327 ']' 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 628327 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 628327 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 628327' 00:15:10.896 killing process with pid 628327 00:15:10.896 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 628327 00:15:10.897 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 628327 00:15:11.180 [2024-07-15 15:20:20.625641] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.180 15:20:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.723 15:20:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:13.723 15:20:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:13.723 00:15:13.723 real 0m14.849s 00:15:13.723 user 0m23.352s 00:15:13.723 sys 0m6.709s 00:15:13.723 15:20:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.723 15:20:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:13.723 ************************************ 00:15:13.723 END TEST nvmf_host_management 00:15:13.723 ************************************ 00:15:13.723 15:20:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:13.723 15:20:22 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:13.723 15:20:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:13.723 15:20:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.723 15:20:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:13.723 ************************************ 00:15:13.723 START TEST nvmf_lvol 00:15:13.723 
************************************ 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:13.723 * Looking for test storage... 00:15:13.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.723 15:20:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:13.724 15:20:22 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:21.920 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:21.921 15:20:30 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:21.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:21.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:21.921 Found net devices under 0000:31:00.0: cvl_0_0 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:21.921 Found net devices under 0000:31:00.1: cvl_0_1 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:21.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:15:21.921 00:15:21.921 --- 10.0.0.2 ping statistics --- 00:15:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.921 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:15:21.921 00:15:21.921 --- 10.0.0.1 ping statistics --- 00:15:21.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.921 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=633942 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 633942 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 633942 ']' 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.921 15:20:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:21.921 [2024-07-15 15:20:30.770416] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:21.921 [2024-07-15 15:20:30.770469] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.921 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.921 [2024-07-15 15:20:30.844251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.921 [2024-07-15 15:20:30.912643] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.921 [2024-07-15 15:20:30.912681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:21.921 [2024-07-15 15:20:30.912688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.921 [2024-07-15 15:20:30.912694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.921 [2024-07-15 15:20:30.912700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.921 [2024-07-15 15:20:30.912807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.921 [2024-07-15 15:20:30.912923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.921 [2024-07-15 15:20:30.912927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.205 [2024-07-15 15:20:31.724567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.205 15:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.465 15:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:22.465 15:20:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.724 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:22.724 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:22.724 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:22.983 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0faa461e-74d2-4e16-abf7-eeb3c8589aae 00:15:22.983 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0faa461e-74d2-4e16-abf7-eeb3c8589aae lvol 20 00:15:23.243 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c413ded7-3f2a-4ec8-a31f-e1cd6bc62f63 00:15:23.243 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:23.243 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c413ded7-3f2a-4ec8-a31f-e1cd6bc62f63 00:15:23.502 15:20:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
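Condensed for reference, the RPC sequence traced above builds an lvol-backed NVMe/TCP target. This is a sketch using this run's values (the two 64 MiB/512 B malloc bdevs, the raid0 base bdev, the 20 MiB lvol, the 10.0.0.2:4420 listener); the $rpc_py shorthand and the lvs/lvol capture variables are illustrative, and the UUIDs are whatever the create calls print:

  # build an lvol store on a raid0 of two malloc bdevs, then export one lvol over NVMe/TCP
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc_py bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # raid0 across the two malloc bdevs
  lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)                   # prints the lvstore UUID
  lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, prints its UUID
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420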
00:15:23.762 [2024-07-15 15:20:33.130566] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.762 15:20:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:23.762 15:20:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=634350 00:15:23.762 15:20:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:23.762 15:20:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:23.762 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.140 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c413ded7-3f2a-4ec8-a31f-e1cd6bc62f63 MY_SNAPSHOT 00:15:25.140 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=37d165d4-6774-4ad0-badc-f00186cf924f 00:15:25.140 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c413ded7-3f2a-4ec8-a31f-e1cd6bc62f63 30 00:15:25.140 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 37d165d4-6774-4ad0-badc-f00186cf924f MY_CLONE 00:15:25.400 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=143e0dac-80f9-4936-9cad-34f4de44e1d1 00:15:25.400 15:20:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 143e0dac-80f9-4936-9cad-34f4de44e1d1 00:15:25.971 15:20:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 634350 00:15:34.110 Initializing NVMe Controllers 00:15:34.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:34.110 Controller IO queue size 128, less than required. 00:15:34.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:34.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:34.110 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:34.110 Initialization complete. Launching workers. 
00:15:34.110 ======================================================== 00:15:34.110 Latency(us) 00:15:34.110 Device Information : IOPS MiB/s Average min max 00:15:34.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12361.90 48.29 10359.36 1531.26 49349.19 00:15:34.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12429.80 48.55 10303.32 3907.56 54829.58 00:15:34.110 ======================================================== 00:15:34.110 Total : 24791.70 96.84 10331.26 1531.26 54829.58 00:15:34.110 00:15:34.110 15:20:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:34.370 15:20:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c413ded7-3f2a-4ec8-a31f-e1cd6bc62f63 00:15:34.370 15:20:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0faa461e-74d2-4e16-abf7-eeb3c8589aae 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.631 rmmod nvme_tcp 00:15:34.631 rmmod nvme_fabrics 00:15:34.631 rmmod nvme_keyring 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 633942 ']' 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 633942 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 633942 ']' 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 633942 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:34.631 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 633942 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 633942' 00:15:34.892 killing process with pid 633942 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 633942 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 633942 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.892 15:20:44 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.892 15:20:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.437 00:15:37.437 real 0m23.698s 00:15:37.437 user 1m3.852s 00:15:37.437 sys 0m8.018s 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:37.437 ************************************ 00:15:37.437 END TEST nvmf_lvol 00:15:37.437 ************************************ 00:15:37.437 15:20:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:37.437 15:20:46 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:37.437 15:20:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:37.437 15:20:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.437 15:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.437 ************************************ 00:15:37.437 START TEST nvmf_lvs_grow 00:15:37.437 ************************************ 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:37.437 * Looking for test storage... 
00:15:37.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.437 15:20:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.575 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:45.576 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:45.576 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:45.576 Found net devices under 0000:31:00.0: cvl_0_0 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:45.576 Found net devices under 0000:31:00.1: cvl_0_1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.740 ms 00:15:45.576 00:15:45.576 --- 10.0.0.2 ping statistics --- 00:15:45.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.576 rtt min/avg/max/mdev = 0.740/0.740/0.740/0.000 ms 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:15:45.576 00:15:45.576 --- 10.0.0.1 ping statistics --- 00:15:45.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.576 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=641163 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 641163 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 641163 ']' 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.576 15:20:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.577 [2024-07-15 15:20:54.647571] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:15:45.577 [2024-07-15 15:20:54.647635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.577 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.577 [2024-07-15 15:20:54.726899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.577 [2024-07-15 15:20:54.799692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.577 [2024-07-15 15:20:54.799733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:45.577 [2024-07-15 15:20:54.799740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.577 [2024-07-15 15:20:54.799747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.577 [2024-07-15 15:20:54.799753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.577 [2024-07-15 15:20:54.799773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:45.837 15:20:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:46.099 [2024-07-15 15:20:55.591041] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:46.099 ************************************ 00:15:46.099 START TEST lvs_grow_clean 00:15:46.099 ************************************ 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:46.099 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:46.360 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:46.360 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:46.621 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=af837f51-df98-4531-b835-1f5f38828db5 00:15:46.621 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:15:46.621 15:20:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:46.621 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:46.621 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:46.622 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u af837f51-df98-4531-b835-1f5f38828db5 lvol 150 00:15:46.884 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=06053c62-7437-4cd6-a210-1fe2126a5585 00:15:46.884 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:46.884 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:46.884 [2024-07-15 15:20:56.444896] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:46.884 [2024-07-15 15:20:56.444950] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:46.884 true 00:15:46.884 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:15:46.884 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:47.144 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:47.144 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:47.144 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 06053c62-7437-4cd6-a210-1fe2126a5585 00:15:47.404 15:20:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:47.701 [2024-07-15 15:20:57.054762] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
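The lvs_grow_clean setup above is the file-backed grow pattern in miniature: a 200 MiB file exposed as an AIO bdev carries the lvol store (49 data clusters at the 4 MiB cluster size), a 150 MiB lvol sits on it, the file is then extended to 400 MiB and the AIO bdev rescanned; the store itself is grown later in the trace, taking total_data_clusters from 49 to 99. A sketch with this run's sizes, where $rpc_py and $aio_file stand in for scripts/rpc.py and the aio_bdev file under spdk/test/nvmf/target:

  # back an lvol store with a plain file via the AIO bdev
  truncate -s 200M "$aio_file"
  $rpc_py bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc_py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 with the 200 MiB file
  $rpc_py bdev_lvol_create -u "$lvs" lvol 150                                   # 150 MiB lvol
  # grow the backing file, let the AIO bdev notice, then grow the store onto the new space
  truncate -s 400M "$aio_file"
  $rpc_py bdev_aio_rescan aio_bdev
  $rpc_py bdev_lvol_grow_lvstore -u "$lvs"
  $rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow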
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=641597 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 641597 /var/tmp/bdevperf.sock 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 641597 ']' 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.701 15:20:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:47.701 [2024-07-15 15:20:57.267699] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
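On the initiator side the test has just launched bdevperf idle and will drive it over its own RPC socket; the controller attach and perform_tests calls appear further down the trace. A sketch of that pattern with this run's workload parameters (4 KiB randwrite, queue depth 128, 10 seconds, core mask 0x2); paths are relative to the SPDK tree and the backgrounding is illustrative:

  # start bdevperf with -z so it waits for RPC-driven configuration instead of running immediately
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the target's namespace over NVMe/TCP; it shows up in bdevperf as bdev Nvme0n1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # kick off the configured workload and collect the per-second and summary IOPS/latency output
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests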
00:15:47.701 [2024-07-15 15:20:57.267765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641597 ] 00:15:47.701 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.961 [2024-07-15 15:20:57.330713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.961 [2024-07-15 15:20:57.395767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.531 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.531 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:48.531 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:48.791 Nvme0n1 00:15:48.791 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:49.051 [ 00:15:49.051 { 00:15:49.051 "name": "Nvme0n1", 00:15:49.051 "aliases": [ 00:15:49.051 "06053c62-7437-4cd6-a210-1fe2126a5585" 00:15:49.052 ], 00:15:49.052 "product_name": "NVMe disk", 00:15:49.052 "block_size": 4096, 00:15:49.052 "num_blocks": 38912, 00:15:49.052 "uuid": "06053c62-7437-4cd6-a210-1fe2126a5585", 00:15:49.052 "assigned_rate_limits": { 00:15:49.052 "rw_ios_per_sec": 0, 00:15:49.052 "rw_mbytes_per_sec": 0, 00:15:49.052 "r_mbytes_per_sec": 0, 00:15:49.052 "w_mbytes_per_sec": 0 00:15:49.052 }, 00:15:49.052 "claimed": false, 00:15:49.052 "zoned": false, 00:15:49.052 "supported_io_types": { 00:15:49.052 "read": true, 00:15:49.052 "write": true, 00:15:49.052 "unmap": true, 00:15:49.052 "flush": true, 00:15:49.052 "reset": true, 00:15:49.052 "nvme_admin": true, 00:15:49.052 "nvme_io": true, 00:15:49.052 "nvme_io_md": false, 00:15:49.052 "write_zeroes": true, 00:15:49.052 "zcopy": false, 00:15:49.052 "get_zone_info": false, 00:15:49.052 "zone_management": false, 00:15:49.052 "zone_append": false, 00:15:49.052 "compare": true, 00:15:49.052 "compare_and_write": true, 00:15:49.052 "abort": true, 00:15:49.052 "seek_hole": false, 00:15:49.052 "seek_data": false, 00:15:49.052 "copy": true, 00:15:49.052 "nvme_iov_md": false 00:15:49.052 }, 00:15:49.052 "memory_domains": [ 00:15:49.052 { 00:15:49.052 "dma_device_id": "system", 00:15:49.052 "dma_device_type": 1 00:15:49.052 } 00:15:49.052 ], 00:15:49.052 "driver_specific": { 00:15:49.052 "nvme": [ 00:15:49.052 { 00:15:49.052 "trid": { 00:15:49.052 "trtype": "TCP", 00:15:49.052 "adrfam": "IPv4", 00:15:49.052 "traddr": "10.0.0.2", 00:15:49.052 "trsvcid": "4420", 00:15:49.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:49.052 }, 00:15:49.052 "ctrlr_data": { 00:15:49.052 "cntlid": 1, 00:15:49.052 "vendor_id": "0x8086", 00:15:49.052 "model_number": "SPDK bdev Controller", 00:15:49.052 "serial_number": "SPDK0", 00:15:49.052 "firmware_revision": "24.09", 00:15:49.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:49.052 "oacs": { 00:15:49.052 "security": 0, 00:15:49.052 "format": 0, 00:15:49.052 "firmware": 0, 00:15:49.052 "ns_manage": 0 00:15:49.052 }, 00:15:49.052 "multi_ctrlr": true, 00:15:49.052 "ana_reporting": false 00:15:49.052 }, 
00:15:49.052 "vs": { 00:15:49.052 "nvme_version": "1.3" 00:15:49.052 }, 00:15:49.052 "ns_data": { 00:15:49.052 "id": 1, 00:15:49.052 "can_share": true 00:15:49.052 } 00:15:49.052 } 00:15:49.052 ], 00:15:49.052 "mp_policy": "active_passive" 00:15:49.052 } 00:15:49.052 } 00:15:49.052 ] 00:15:49.052 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=641935 00:15:49.052 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:49.052 15:20:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:49.052 Running I/O for 10 seconds... 00:15:49.992 Latency(us) 00:15:49.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.992 Nvme0n1 : 1.00 17981.00 70.24 0.00 0.00 0.00 0.00 0.00 00:15:49.992 =================================================================================================================== 00:15:49.992 Total : 17981.00 70.24 0.00 0.00 0.00 0.00 0.00 00:15:49.992 00:15:50.934 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u af837f51-df98-4531-b835-1f5f38828db5 00:15:51.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.195 Nvme0n1 : 2.00 18094.50 70.68 0.00 0.00 0.00 0.00 0.00 00:15:51.195 =================================================================================================================== 00:15:51.195 Total : 18094.50 70.68 0.00 0.00 0.00 0.00 0.00 00:15:51.195 00:15:51.195 true 00:15:51.195 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:15:51.195 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:51.464 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:51.464 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:51.464 15:21:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 641935 00:15:52.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.052 Nvme0n1 : 3.00 18130.00 70.82 0.00 0.00 0.00 0.00 0.00 00:15:52.052 =================================================================================================================== 00:15:52.052 Total : 18130.00 70.82 0.00 0.00 0.00 0.00 0.00 00:15:52.052 00:15:52.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.994 Nvme0n1 : 4.00 18171.75 70.98 0.00 0.00 0.00 0.00 0.00 00:15:52.994 =================================================================================================================== 00:15:52.994 Total : 18171.75 70.98 0.00 0.00 0.00 0.00 0.00 00:15:52.994 00:15:54.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.376 Nvme0n1 : 5.00 18209.20 71.13 0.00 0.00 0.00 0.00 0.00 00:15:54.376 =================================================================================================================== 00:15:54.376 
Total : 18209.20 71.13 0.00 0.00 0.00 0.00 0.00 00:15:54.376 00:15:55.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.317 Nvme0n1 : 6.00 18234.50 71.23 0.00 0.00 0.00 0.00 0.00 00:15:55.317 =================================================================================================================== 00:15:55.317 Total : 18234.50 71.23 0.00 0.00 0.00 0.00 0.00 00:15:55.317 00:15:56.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.258 Nvme0n1 : 7.00 18253.29 71.30 0.00 0.00 0.00 0.00 0.00 00:15:56.258 =================================================================================================================== 00:15:56.258 Total : 18253.29 71.30 0.00 0.00 0.00 0.00 0.00 00:15:56.258 00:15:57.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.199 Nvme0n1 : 8.00 18266.88 71.35 0.00 0.00 0.00 0.00 0.00 00:15:57.199 =================================================================================================================== 00:15:57.199 Total : 18266.88 71.35 0.00 0.00 0.00 0.00 0.00 00:15:57.199 00:15:58.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.140 Nvme0n1 : 9.00 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:15:58.140 =================================================================================================================== 00:15:58.140 Total : 18270.00 71.37 0.00 0.00 0.00 0.00 0.00 00:15:58.140 00:15:59.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.082 Nvme0n1 : 10.00 18284.90 71.43 0.00 0.00 0.00 0.00 0.00 00:15:59.083 =================================================================================================================== 00:15:59.083 Total : 18284.90 71.43 0.00 0.00 0.00 0.00 0.00 00:15:59.083 00:15:59.083 00:15:59.083 Latency(us) 00:15:59.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.083 Nvme0n1 : 10.00 18282.34 71.42 0.00 0.00 6997.07 4177.92 16274.77 00:15:59.083 =================================================================================================================== 00:15:59.083 Total : 18282.34 71.42 0.00 0.00 6997.07 4177.92 16274.77 00:15:59.083 0 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 641597 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 641597 ']' 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 641597 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 641597 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 641597' 00:15:59.083 killing process with pid 641597 00:15:59.083 15:21:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 641597 00:15:59.083 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.083 00:15:59.083 Latency(us) 00:15:59.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.083 =================================================================================================================== 00:15:59.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.083 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 641597 00:15:59.343 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:59.604 15:21:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:59.604 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:15:59.604 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:59.866 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:59.866 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:59.866 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:59.866 [2024-07-15 15:21:09.451385] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:16:00.128 request: 00:16:00.128 { 00:16:00.128 "uuid": "af837f51-df98-4531-b835-1f5f38828db5", 00:16:00.128 "method": "bdev_lvol_get_lvstores", 00:16:00.128 "req_id": 1 00:16:00.128 } 00:16:00.128 Got JSON-RPC error response 00:16:00.128 response: 00:16:00.128 { 00:16:00.128 "code": -19, 00:16:00.128 "message": "No such device" 00:16:00.128 } 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:00.128 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:00.389 aio_bdev 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 06053c62-7437-4cd6-a210-1fe2126a5585 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=06053c62-7437-4cd6-a210-1fe2126a5585 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:00.389 15:21:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 06053c62-7437-4cd6-a210-1fe2126a5585 -t 2000 00:16:00.651 [ 00:16:00.651 { 00:16:00.651 "name": "06053c62-7437-4cd6-a210-1fe2126a5585", 00:16:00.651 "aliases": [ 00:16:00.651 "lvs/lvol" 00:16:00.651 ], 00:16:00.651 "product_name": "Logical Volume", 00:16:00.651 "block_size": 4096, 00:16:00.651 "num_blocks": 38912, 00:16:00.651 "uuid": "06053c62-7437-4cd6-a210-1fe2126a5585", 00:16:00.651 "assigned_rate_limits": { 00:16:00.651 "rw_ios_per_sec": 0, 00:16:00.651 "rw_mbytes_per_sec": 0, 00:16:00.651 "r_mbytes_per_sec": 0, 00:16:00.651 "w_mbytes_per_sec": 0 00:16:00.651 }, 00:16:00.651 "claimed": false, 00:16:00.651 "zoned": false, 00:16:00.651 "supported_io_types": { 00:16:00.651 "read": true, 00:16:00.651 "write": true, 00:16:00.651 "unmap": true, 00:16:00.651 "flush": false, 00:16:00.651 "reset": true, 00:16:00.651 "nvme_admin": false, 00:16:00.651 "nvme_io": false, 00:16:00.651 
"nvme_io_md": false, 00:16:00.651 "write_zeroes": true, 00:16:00.651 "zcopy": false, 00:16:00.651 "get_zone_info": false, 00:16:00.651 "zone_management": false, 00:16:00.651 "zone_append": false, 00:16:00.651 "compare": false, 00:16:00.651 "compare_and_write": false, 00:16:00.651 "abort": false, 00:16:00.651 "seek_hole": true, 00:16:00.651 "seek_data": true, 00:16:00.651 "copy": false, 00:16:00.651 "nvme_iov_md": false 00:16:00.651 }, 00:16:00.651 "driver_specific": { 00:16:00.651 "lvol": { 00:16:00.651 "lvol_store_uuid": "af837f51-df98-4531-b835-1f5f38828db5", 00:16:00.651 "base_bdev": "aio_bdev", 00:16:00.651 "thin_provision": false, 00:16:00.651 "num_allocated_clusters": 38, 00:16:00.651 "snapshot": false, 00:16:00.651 "clone": false, 00:16:00.651 "esnap_clone": false 00:16:00.651 } 00:16:00.651 } 00:16:00.651 } 00:16:00.651 ] 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u af837f51-df98-4531-b835-1f5f38828db5 00:16:00.651 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:00.911 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:00.911 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 06053c62-7437-4cd6-a210-1fe2126a5585 00:16:01.172 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af837f51-df98-4531-b835-1f5f38828db5 00:16:01.172 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.433 00:16:01.433 real 0m15.297s 00:16:01.433 user 0m15.042s 00:16:01.433 sys 0m1.212s 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:01.433 ************************************ 00:16:01.433 END TEST lvs_grow_clean 00:16:01.433 ************************************ 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:01.433 15:21:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:01.433 ************************************ 00:16:01.433 START TEST lvs_grow_dirty 00:16:01.433 ************************************ 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:01.433 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:01.692 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:01.692 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:01.953 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 lvol 150 00:16:02.213 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:02.213 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:02.213 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:02.213 
[2024-07-15 15:21:11.813007] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:02.213 [2024-07-15 15:21:11.813061] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:02.213 true 00:16:02.214 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:02.214 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:02.474 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:02.474 15:21:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:02.734 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:02.734 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:03.030 [2024-07-15 15:21:12.430901] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=645247 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 645247 /var/tmp/bdevperf.sock 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 645247 ']' 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.030 15:21:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:03.295 [2024-07-15 15:21:12.654979] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:03.295 [2024-07-15 15:21:12.655033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645247 ] 00:16:03.295 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.295 [2024-07-15 15:21:12.716734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.295 [2024-07-15 15:21:12.781331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.866 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:03.867 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:03.867 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:04.127 Nvme0n1 00:16:04.127 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:04.387 [ 00:16:04.387 { 00:16:04.387 "name": "Nvme0n1", 00:16:04.388 "aliases": [ 00:16:04.388 "6f8d08ae-0727-4c9e-a970-409cf75325a7" 00:16:04.388 ], 00:16:04.388 "product_name": "NVMe disk", 00:16:04.388 "block_size": 4096, 00:16:04.388 "num_blocks": 38912, 00:16:04.388 "uuid": "6f8d08ae-0727-4c9e-a970-409cf75325a7", 00:16:04.388 "assigned_rate_limits": { 00:16:04.388 "rw_ios_per_sec": 0, 00:16:04.388 "rw_mbytes_per_sec": 0, 00:16:04.388 "r_mbytes_per_sec": 0, 00:16:04.388 "w_mbytes_per_sec": 0 00:16:04.388 }, 00:16:04.388 "claimed": false, 00:16:04.388 "zoned": false, 00:16:04.388 "supported_io_types": { 00:16:04.388 "read": true, 00:16:04.388 "write": true, 00:16:04.388 "unmap": true, 00:16:04.388 "flush": true, 00:16:04.388 "reset": true, 00:16:04.388 "nvme_admin": true, 00:16:04.388 "nvme_io": true, 00:16:04.388 "nvme_io_md": false, 00:16:04.388 "write_zeroes": true, 00:16:04.388 "zcopy": false, 00:16:04.388 "get_zone_info": false, 00:16:04.388 "zone_management": false, 00:16:04.388 "zone_append": false, 00:16:04.388 "compare": true, 00:16:04.388 "compare_and_write": true, 00:16:04.388 "abort": true, 00:16:04.388 "seek_hole": false, 00:16:04.388 "seek_data": false, 00:16:04.388 "copy": true, 00:16:04.388 "nvme_iov_md": false 00:16:04.388 }, 00:16:04.388 "memory_domains": [ 00:16:04.388 { 00:16:04.388 "dma_device_id": "system", 00:16:04.388 "dma_device_type": 1 00:16:04.388 } 00:16:04.388 ], 00:16:04.388 "driver_specific": { 00:16:04.388 "nvme": [ 00:16:04.388 { 00:16:04.388 "trid": { 00:16:04.388 "trtype": "TCP", 00:16:04.388 "adrfam": "IPv4", 00:16:04.388 "traddr": "10.0.0.2", 00:16:04.388 "trsvcid": "4420", 00:16:04.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:04.388 }, 00:16:04.388 "ctrlr_data": { 00:16:04.388 "cntlid": 1, 00:16:04.388 "vendor_id": "0x8086", 00:16:04.388 "model_number": "SPDK bdev Controller", 00:16:04.388 "serial_number": "SPDK0", 
00:16:04.388 "firmware_revision": "24.09", 00:16:04.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:04.388 "oacs": { 00:16:04.388 "security": 0, 00:16:04.388 "format": 0, 00:16:04.388 "firmware": 0, 00:16:04.388 "ns_manage": 0 00:16:04.388 }, 00:16:04.388 "multi_ctrlr": true, 00:16:04.388 "ana_reporting": false 00:16:04.388 }, 00:16:04.388 "vs": { 00:16:04.388 "nvme_version": "1.3" 00:16:04.388 }, 00:16:04.388 "ns_data": { 00:16:04.388 "id": 1, 00:16:04.388 "can_share": true 00:16:04.388 } 00:16:04.388 } 00:16:04.388 ], 00:16:04.388 "mp_policy": "active_passive" 00:16:04.388 } 00:16:04.388 } 00:16:04.388 ] 00:16:04.388 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=645582 00:16:04.388 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:04.388 15:21:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:04.388 Running I/O for 10 seconds... 00:16:05.772 Latency(us) 00:16:05.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:05.772 Nvme0n1 : 1.00 18069.00 70.58 0.00 0.00 0.00 0.00 0.00 00:16:05.772 =================================================================================================================== 00:16:05.772 Total : 18069.00 70.58 0.00 0.00 0.00 0.00 0.00 00:16:05.772 00:16:06.343 15:21:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:06.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:06.604 Nvme0n1 : 2.00 18152.00 70.91 0.00 0.00 0.00 0.00 0.00 00:16:06.604 =================================================================================================================== 00:16:06.604 Total : 18152.00 70.91 0.00 0.00 0.00 0.00 0.00 00:16:06.604 00:16:06.604 true 00:16:06.604 15:21:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:06.604 15:21:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:06.863 15:21:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:06.863 15:21:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:06.863 15:21:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 645582 00:16:07.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.434 Nvme0n1 : 3.00 18202.33 71.10 0.00 0.00 0.00 0.00 0.00 00:16:07.434 =================================================================================================================== 00:16:07.434 Total : 18202.33 71.10 0.00 0.00 0.00 0.00 0.00 00:16:07.434 00:16:08.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.376 Nvme0n1 : 4.00 18253.50 71.30 0.00 0.00 0.00 0.00 0.00 00:16:08.376 =================================================================================================================== 00:16:08.376 Total : 18253.50 71.30 0.00 0.00 
0.00 0.00 0.00 00:16:08.376 00:16:09.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.780 Nvme0n1 : 5.00 18265.80 71.35 0.00 0.00 0.00 0.00 0.00 00:16:09.780 =================================================================================================================== 00:16:09.780 Total : 18265.80 71.35 0.00 0.00 0.00 0.00 0.00 00:16:09.780 00:16:10.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.721 Nvme0n1 : 6.00 18291.17 71.45 0.00 0.00 0.00 0.00 0.00 00:16:10.721 =================================================================================================================== 00:16:10.721 Total : 18291.17 71.45 0.00 0.00 0.00 0.00 0.00 00:16:10.721 00:16:11.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.795 Nvme0n1 : 7.00 18301.57 71.49 0.00 0.00 0.00 0.00 0.00 00:16:11.795 =================================================================================================================== 00:16:11.795 Total : 18301.57 71.49 0.00 0.00 0.00 0.00 0.00 00:16:11.795 00:16:12.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:12.375 Nvme0n1 : 8.00 18316.38 71.55 0.00 0.00 0.00 0.00 0.00 00:16:12.375 =================================================================================================================== 00:16:12.375 Total : 18316.38 71.55 0.00 0.00 0.00 0.00 0.00 00:16:12.375 00:16:13.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.759 Nvme0n1 : 9.00 18336.11 71.63 0.00 0.00 0.00 0.00 0.00 00:16:13.759 =================================================================================================================== 00:16:13.759 Total : 18336.11 71.63 0.00 0.00 0.00 0.00 0.00 00:16:13.759 00:16:14.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.701 Nvme0n1 : 10.00 18340.90 71.64 0.00 0.00 0.00 0.00 0.00 00:16:14.701 =================================================================================================================== 00:16:14.701 Total : 18340.90 71.64 0.00 0.00 0.00 0.00 0.00 00:16:14.701 00:16:14.701 00:16:14.701 Latency(us) 00:16:14.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:14.701 Nvme0n1 : 10.00 18345.64 71.66 0.00 0.00 6973.46 4177.92 14090.24 00:16:14.701 =================================================================================================================== 00:16:14.701 Total : 18345.64 71.66 0.00 0.00 6973.46 4177.92 14090.24 00:16:14.701 0 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 645247 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 645247 ']' 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 645247 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 645247 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.701 15:21:24 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 645247' 00:16:14.701 killing process with pid 645247 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 645247 00:16:14.701 Received shutdown signal, test time was about 10.000000 seconds 00:16:14.701 00:16:14.701 Latency(us) 00:16:14.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.701 =================================================================================================================== 00:16:14.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 645247 00:16:14.701 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.961 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:14.961 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:14.961 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 641163 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 641163 00:16:15.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 641163 Killed "${NVMF_APP[@]}" "$@" 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=647608 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 647608 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 647608 ']' 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.221 15:21:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:15.221 [2024-07-15 15:21:24.780328] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:15.221 [2024-07-15 15:21:24.780383] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.221 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.482 [2024-07-15 15:21:24.852135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.482 [2024-07-15 15:21:24.917319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.482 [2024-07-15 15:21:24.917351] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.482 [2024-07-15 15:21:24.917358] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.482 [2024-07-15 15:21:24.917364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.482 [2024-07-15 15:21:24.917370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:15.482 [2024-07-15 15:21:24.917387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.052 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:16.313 [2024-07-15 15:21:25.718609] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:16.313 [2024-07-15 15:21:25.718699] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:16.313 [2024-07-15 15:21:25.718729] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:16.313 15:21:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f8d08ae-0727-4c9e-a970-409cf75325a7 -t 2000 00:16:16.573 [ 00:16:16.573 { 00:16:16.573 "name": "6f8d08ae-0727-4c9e-a970-409cf75325a7", 00:16:16.573 "aliases": [ 00:16:16.573 "lvs/lvol" 00:16:16.573 ], 00:16:16.573 "product_name": "Logical Volume", 00:16:16.573 "block_size": 4096, 00:16:16.573 "num_blocks": 38912, 00:16:16.573 "uuid": "6f8d08ae-0727-4c9e-a970-409cf75325a7", 00:16:16.573 "assigned_rate_limits": { 00:16:16.573 "rw_ios_per_sec": 0, 00:16:16.573 "rw_mbytes_per_sec": 0, 00:16:16.573 "r_mbytes_per_sec": 0, 00:16:16.573 "w_mbytes_per_sec": 0 00:16:16.573 }, 00:16:16.573 "claimed": false, 00:16:16.573 "zoned": false, 00:16:16.573 "supported_io_types": { 00:16:16.573 "read": true, 00:16:16.573 "write": true, 00:16:16.573 "unmap": true, 00:16:16.573 "flush": false, 00:16:16.573 "reset": true, 00:16:16.573 "nvme_admin": false, 00:16:16.573 "nvme_io": false, 00:16:16.573 "nvme_io_md": 
false, 00:16:16.573 "write_zeroes": true, 00:16:16.573 "zcopy": false, 00:16:16.573 "get_zone_info": false, 00:16:16.573 "zone_management": false, 00:16:16.573 "zone_append": false, 00:16:16.573 "compare": false, 00:16:16.573 "compare_and_write": false, 00:16:16.573 "abort": false, 00:16:16.573 "seek_hole": true, 00:16:16.573 "seek_data": true, 00:16:16.573 "copy": false, 00:16:16.573 "nvme_iov_md": false 00:16:16.573 }, 00:16:16.573 "driver_specific": { 00:16:16.573 "lvol": { 00:16:16.573 "lvol_store_uuid": "3719bbe7-ab5e-4841-9587-27ace8b4dfa1", 00:16:16.573 "base_bdev": "aio_bdev", 00:16:16.573 "thin_provision": false, 00:16:16.573 "num_allocated_clusters": 38, 00:16:16.573 "snapshot": false, 00:16:16.573 "clone": false, 00:16:16.573 "esnap_clone": false 00:16:16.573 } 00:16:16.573 } 00:16:16.573 } 00:16:16.573 ] 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:16.573 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:16.832 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:16.832 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:17.092 [2024-07-15 15:21:26.478505] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:17.092 request: 00:16:17.092 { 00:16:17.092 "uuid": "3719bbe7-ab5e-4841-9587-27ace8b4dfa1", 00:16:17.092 "method": "bdev_lvol_get_lvstores", 00:16:17.092 "req_id": 1 00:16:17.092 } 00:16:17.092 Got JSON-RPC error response 00:16:17.092 response: 00:16:17.092 { 00:16:17.092 "code": -19, 00:16:17.092 "message": "No such device" 00:16:17.092 } 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:17.092 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:17.352 aio_bdev 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:17.352 15:21:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f8d08ae-0727-4c9e-a970-409cf75325a7 -t 2000 00:16:17.612 [ 00:16:17.612 { 00:16:17.612 "name": "6f8d08ae-0727-4c9e-a970-409cf75325a7", 00:16:17.612 "aliases": [ 00:16:17.612 "lvs/lvol" 00:16:17.612 ], 00:16:17.612 "product_name": "Logical Volume", 00:16:17.612 "block_size": 4096, 00:16:17.612 "num_blocks": 38912, 00:16:17.612 "uuid": "6f8d08ae-0727-4c9e-a970-409cf75325a7", 00:16:17.612 "assigned_rate_limits": { 00:16:17.612 "rw_ios_per_sec": 0, 00:16:17.612 "rw_mbytes_per_sec": 0, 00:16:17.612 "r_mbytes_per_sec": 0, 00:16:17.612 "w_mbytes_per_sec": 0 00:16:17.612 }, 00:16:17.612 "claimed": false, 00:16:17.612 "zoned": false, 00:16:17.612 "supported_io_types": { 
00:16:17.612 "read": true, 00:16:17.612 "write": true, 00:16:17.612 "unmap": true, 00:16:17.612 "flush": false, 00:16:17.612 "reset": true, 00:16:17.612 "nvme_admin": false, 00:16:17.612 "nvme_io": false, 00:16:17.612 "nvme_io_md": false, 00:16:17.612 "write_zeroes": true, 00:16:17.612 "zcopy": false, 00:16:17.612 "get_zone_info": false, 00:16:17.612 "zone_management": false, 00:16:17.612 "zone_append": false, 00:16:17.612 "compare": false, 00:16:17.612 "compare_and_write": false, 00:16:17.612 "abort": false, 00:16:17.612 "seek_hole": true, 00:16:17.612 "seek_data": true, 00:16:17.612 "copy": false, 00:16:17.612 "nvme_iov_md": false 00:16:17.612 }, 00:16:17.612 "driver_specific": { 00:16:17.612 "lvol": { 00:16:17.612 "lvol_store_uuid": "3719bbe7-ab5e-4841-9587-27ace8b4dfa1", 00:16:17.612 "base_bdev": "aio_bdev", 00:16:17.612 "thin_provision": false, 00:16:17.612 "num_allocated_clusters": 38, 00:16:17.612 "snapshot": false, 00:16:17.612 "clone": false, 00:16:17.612 "esnap_clone": false 00:16:17.612 } 00:16:17.612 } 00:16:17.612 } 00:16:17.612 ] 00:16:17.612 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:17.613 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:17.613 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:17.873 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:17.873 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:17.873 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:17.873 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:17.873 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f8d08ae-0727-4c9e-a970-409cf75325a7 00:16:18.135 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3719bbe7-ab5e-4841-9587-27ace8b4dfa1 00:16:18.135 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:18.394 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:18.394 00:16:18.394 real 0m16.901s 00:16:18.394 user 0m44.378s 00:16:18.394 sys 0m2.828s 00:16:18.394 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:18.394 15:21:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:18.394 ************************************ 00:16:18.394 END TEST lvs_grow_dirty 00:16:18.394 ************************************ 00:16:18.394 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:18.395 15:21:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:18.395 nvmf_trace.0 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.655 rmmod nvme_tcp 00:16:18.655 rmmod nvme_fabrics 00:16:18.655 rmmod nvme_keyring 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 647608 ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 647608 ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 647608' 00:16:18.655 killing process with pid 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 647608 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:18.655 15:21:28 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.655 15:21:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.199 15:21:30 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:21.199 00:16:21.199 real 0m43.762s 00:16:21.199 user 1m5.527s 00:16:21.199 sys 0m10.253s 00:16:21.199 15:21:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.199 15:21:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:21.199 ************************************ 00:16:21.199 END TEST nvmf_lvs_grow 00:16:21.199 ************************************ 00:16:21.199 15:21:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:21.199 15:21:30 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:21.199 15:21:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:21.199 15:21:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.199 15:21:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:21.199 ************************************ 00:16:21.199 START TEST nvmf_bdev_io_wait 00:16:21.199 ************************************ 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:21.199 * Looking for test storage... 
00:16:21.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.199 15:21:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:29.336 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:29.336 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:29.336 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:29.337 Found net devices under 0000:31:00.0: cvl_0_0 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:29.337 Found net devices under 0000:31:00.1: cvl_0_1 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:29.337 15:21:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:29.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:29.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:16:29.337 00:16:29.337 --- 10.0.0.2 ping statistics --- 00:16:29.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.337 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:29.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:16:29.337 00:16:29.337 --- 10.0.0.1 ping statistics --- 00:16:29.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.337 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=652884 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 652884 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 652884 ']' 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 [2024-07-15 15:21:38.147890] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:29.337 [2024-07-15 15:21:38.147937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.337 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.337 [2024-07-15 15:21:38.219268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.337 [2024-07-15 15:21:38.285773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.337 [2024-07-15 15:21:38.285810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.337 [2024-07-15 15:21:38.285818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.337 [2024-07-15 15:21:38.285824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.337 [2024-07-15 15:21:38.285830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.337 [2024-07-15 15:21:38.285939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.337 [2024-07-15 15:21:38.286054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.337 [2024-07-15 15:21:38.286209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.337 [2024-07-15 15:21:38.286210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.337 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:29.597 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 [2024-07-15 15:21:39.014402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
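Taken together, the target-side setup that bdev_io_wait.sh drives through rpc_cmd around this point (the subsystem calls follow just below) boils down to:
  rpc_cmd bdev_set_options -p 5 -c 1                       # deliberately small bdev_io pool/cache so I/O must queue, which is what this test exercises
  rpc_cmd framework_start_init
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420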
00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 Malloc0 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:29.597 [2024-07-15 15:21:39.086225] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=653013 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=653016 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:29.597 { 00:16:29.597 "params": { 00:16:29.597 "name": "Nvme$subsystem", 00:16:29.597 "trtype": "$TEST_TRANSPORT", 00:16:29.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.597 "adrfam": "ipv4", 00:16:29.597 "trsvcid": "$NVMF_PORT", 00:16:29.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.597 "hdgst": ${hdgst:-false}, 00:16:29.597 "ddgst": ${ddgst:-false} 00:16:29.597 }, 00:16:29.597 "method": "bdev_nvme_attach_controller" 00:16:29.597 } 00:16:29.597 EOF 00:16:29.597 )") 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=653019 00:16:29.597 15:21:39 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=653022 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:29.597 { 00:16:29.597 "params": { 00:16:29.597 "name": "Nvme$subsystem", 00:16:29.597 "trtype": "$TEST_TRANSPORT", 00:16:29.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.597 "adrfam": "ipv4", 00:16:29.597 "trsvcid": "$NVMF_PORT", 00:16:29.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.597 "hdgst": ${hdgst:-false}, 00:16:29.597 "ddgst": ${ddgst:-false} 00:16:29.597 }, 00:16:29.597 "method": "bdev_nvme_attach_controller" 00:16:29.597 } 00:16:29.597 EOF 00:16:29.597 )") 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:29.597 { 00:16:29.597 "params": { 00:16:29.597 "name": "Nvme$subsystem", 00:16:29.597 "trtype": "$TEST_TRANSPORT", 00:16:29.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.597 "adrfam": "ipv4", 00:16:29.597 "trsvcid": "$NVMF_PORT", 00:16:29.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.597 "hdgst": ${hdgst:-false}, 00:16:29.597 "ddgst": ${ddgst:-false} 00:16:29.597 }, 00:16:29.597 "method": "bdev_nvme_attach_controller" 00:16:29.597 } 00:16:29.597 EOF 00:16:29.597 )") 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:29.597 { 00:16:29.597 "params": { 00:16:29.597 "name": "Nvme$subsystem", 00:16:29.597 "trtype": "$TEST_TRANSPORT", 00:16:29.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.597 "adrfam": "ipv4", 00:16:29.597 "trsvcid": "$NVMF_PORT", 00:16:29.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.597 "hdgst": ${hdgst:-false}, 00:16:29.597 "ddgst": ${ddgst:-false} 00:16:29.597 }, 00:16:29.597 "method": "bdev_nvme_attach_controller" 00:16:29.597 } 00:16:29.597 EOF 00:16:29.597 )") 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:29.597 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 653013 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:29.598 "params": { 00:16:29.598 "name": "Nvme1", 00:16:29.598 "trtype": "tcp", 00:16:29.598 "traddr": "10.0.0.2", 00:16:29.598 "adrfam": "ipv4", 00:16:29.598 "trsvcid": "4420", 00:16:29.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.598 "hdgst": false, 00:16:29.598 "ddgst": false 00:16:29.598 }, 00:16:29.598 "method": "bdev_nvme_attach_controller" 00:16:29.598 }' 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
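Each of the four bdevperf instances (write, read, flush, unmap; core masks 0x10/0x20/0x40/0x80, all with -q 128 -o 4096 -t 1 -s 256) is fed the same generated target JSON over --json /dev/fd/63; the controller-attach entry each one assembles, shown flattened in the printf output around this point, is simply:
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }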
00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:29.598 "params": { 00:16:29.598 "name": "Nvme1", 00:16:29.598 "trtype": "tcp", 00:16:29.598 "traddr": "10.0.0.2", 00:16:29.598 "adrfam": "ipv4", 00:16:29.598 "trsvcid": "4420", 00:16:29.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.598 "hdgst": false, 00:16:29.598 "ddgst": false 00:16:29.598 }, 00:16:29.598 "method": "bdev_nvme_attach_controller" 00:16:29.598 }' 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:29.598 "params": { 00:16:29.598 "name": "Nvme1", 00:16:29.598 "trtype": "tcp", 00:16:29.598 "traddr": "10.0.0.2", 00:16:29.598 "adrfam": "ipv4", 00:16:29.598 "trsvcid": "4420", 00:16:29.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.598 "hdgst": false, 00:16:29.598 "ddgst": false 00:16:29.598 }, 00:16:29.598 "method": "bdev_nvme_attach_controller" 00:16:29.598 }' 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:29.598 15:21:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:29.598 "params": { 00:16:29.598 "name": "Nvme1", 00:16:29.598 "trtype": "tcp", 00:16:29.598 "traddr": "10.0.0.2", 00:16:29.598 "adrfam": "ipv4", 00:16:29.598 "trsvcid": "4420", 00:16:29.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.598 "hdgst": false, 00:16:29.598 "ddgst": false 00:16:29.598 }, 00:16:29.598 "method": "bdev_nvme_attach_controller" 00:16:29.598 }' 00:16:29.598 [2024-07-15 15:21:39.139301] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:29.598 [2024-07-15 15:21:39.139350] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:29.598 [2024-07-15 15:21:39.140038] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:29.598 [2024-07-15 15:21:39.140092] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:29.598 [2024-07-15 15:21:39.141538] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:16:29.598 [2024-07-15 15:21:39.141583] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:29.598 [2024-07-15 15:21:39.144834] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:16:29.598 [2024-07-15 15:21:39.144878] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:29.598 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.859 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.859 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.859 [2024-07-15 15:21:39.284297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.859 [2024-07-15 15:21:39.321390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.859 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.859 [2024-07-15 15:21:39.335538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:29.859 [2024-07-15 15:21:39.366918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.859 [2024-07-15 15:21:39.371505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:29.859 [2024-07-15 15:21:39.416921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:29.859 [2024-07-15 15:21:39.428958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.120 [2024-07-15 15:21:39.480751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:30.120 Running I/O for 1 seconds... 00:16:30.120 Running I/O for 1 seconds... 00:16:30.120 Running I/O for 1 seconds... 00:16:30.120 Running I/O for 1 seconds... 00:16:31.072 00:16:31.072 Latency(us) 00:16:31.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.072 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:31.072 Nvme1n1 : 1.01 8731.26 34.11 0.00 0.00 14537.70 7318.19 23811.41 00:16:31.072 =================================================================================================================== 00:16:31.072 Total : 8731.26 34.11 0.00 0.00 14537.70 7318.19 23811.41 00:16:31.072 00:16:31.072 Latency(us) 00:16:31.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.072 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:31.072 Nvme1n1 : 1.00 8733.46 34.12 0.00 0.00 14621.67 4314.45 32331.09 00:16:31.072 =================================================================================================================== 00:16:31.072 Total : 8733.46 34.12 0.00 0.00 14621.67 4314.45 32331.09 00:16:31.072 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 653016 00:16:31.072 00:16:31.072 Latency(us) 00:16:31.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.072 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:31.072 Nvme1n1 : 1.00 18676.11 72.95 0.00 0.00 6834.36 4478.29 18240.85 00:16:31.072 =================================================================================================================== 00:16:31.072 Total : 18676.11 72.95 0.00 0.00 6834.36 4478.29 18240.85 00:16:31.333 00:16:31.333 Latency(us) 00:16:31.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.333 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:31.333 Nvme1n1 : 1.00 188219.58 735.23 0.00 0.00 677.25 271.36 1126.40 00:16:31.333 =================================================================================================================== 00:16:31.333 Total : 188219.58 
735.23 0.00 0.00 677.25 271.36 1126.40 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 653019 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 653022 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.333 rmmod nvme_tcp 00:16:31.333 rmmod nvme_fabrics 00:16:31.333 rmmod nvme_keyring 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 652884 ']' 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 652884 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 652884 ']' 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 652884 00:16:31.333 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:31.593 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.593 15:21:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 652884 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 652884' 00:16:31.593 killing process with pid 652884 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 652884 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 652884 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.593 15:21:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.133 15:21:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.133 00:16:34.133 real 0m12.793s 00:16:34.133 user 0m18.884s 00:16:34.133 sys 0m6.890s 00:16:34.133 15:21:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.134 15:21:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:34.134 ************************************ 00:16:34.134 END TEST nvmf_bdev_io_wait 00:16:34.134 ************************************ 00:16:34.134 15:21:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.134 15:21:43 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:34.134 15:21:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.134 15:21:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.134 15:21:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.134 ************************************ 00:16:34.134 START TEST nvmf_queue_depth 00:16:34.134 ************************************ 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:34.134 * Looking for test storage... 
00:16:34.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.134 15:21:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.289 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.290 
15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:42.290 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:42.290 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:42.290 Found net devices under 0000:31:00.0: cvl_0_0 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:42.290 Found net devices under 0000:31:00.1: cvl_0_1 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:42.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:16:42.290 00:16:42.290 --- 10.0.0.2 ping statistics --- 00:16:42.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.290 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:42.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:16:42.290 00:16:42.290 --- 10.0.0.1 ping statistics --- 00:16:42.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.290 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=657813 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 657813 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 657813 ']' 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.290 15:21:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 [2024-07-15 15:21:50.952432] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
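Up to this point the queue_depth run has done the standard physical-NIC bring-up from nvmf/common.sh: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, both ends are addressed on 10.0.0.0/24, TCP port 4420 is opened, reachability is checked with ping, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on core 1 (-m 0x2). A minimal sketch of that sequence, reconstructed from the commands traced above (interface names and addresses are the ones this log reports; the real helper does additional bookkeeping):

    # Move the target-side port into its own namespace and address both ends
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # Start the SPDK target inside the namespace (shm id 0, all trace groups, core mask 0x2)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &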
00:16:42.290 [2024-07-15 15:21:50.952483] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.290 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.290 [2024-07-15 15:21:51.026373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.290 [2024-07-15 15:21:51.093813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.290 [2024-07-15 15:21:51.093848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.290 [2024-07-15 15:21:51.093855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.290 [2024-07-15 15:21:51.093861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.290 [2024-07-15 15:21:51.093867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.290 [2024-07-15 15:21:51.093897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 [2024-07-15 15:21:51.760689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 Malloc0 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.290 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.291 
15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.291 [2024-07-15 15:21:51.825865] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=658160 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 658160 /var/tmp/bdevperf.sock 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 658160 ']' 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.291 15:21:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:42.291 [2024-07-15 15:21:51.887601] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
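queue_depth.sh then provisions the target over JSON-RPC and launches bdevperf as a second SPDK application on its own RPC socket; the trace below attaches the exported namespace and drives verify I/O at queue depth 1024 for 10 seconds. A sketch of the equivalent manual sequence, assembled from the rpc_cmd calls traced above (assuming it is run from the SPDK source tree, with rpc.py using its default socket /var/tmp/spdk.sock for the target):

    # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem and listener
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: bdevperf waits (-z) until a bdev is attached over its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the run recorded below this sustains roughly 9.4k IOPS (about 37 MiB/s of 4 KiB verify I/O) against the malloc-backed namespace before bdevperf and nvmf_tgt are killed and the kernel nvme modules are unloaded.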
00:16:42.291 [2024-07-15 15:21:51.887661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658160 ] 00:16:42.551 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.551 [2024-07-15 15:21:51.956217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.551 [2024-07-15 15:21:52.021343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.121 15:21:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.121 15:21:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:43.121 15:21:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:43.121 15:21:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.121 15:21:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:43.380 NVMe0n1 00:16:43.380 15:21:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.380 15:21:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:43.380 Running I/O for 10 seconds... 00:16:53.372 00:16:53.372 Latency(us) 00:16:53.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.372 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:53.372 Verification LBA range: start 0x0 length 0x4000 00:16:53.372 NVMe0n1 : 10.08 9440.89 36.88 0.00 0.00 108044.68 24794.45 78643.20 00:16:53.372 =================================================================================================================== 00:16:53.372 Total : 9440.89 36.88 0.00 0.00 108044.68 24794.45 78643.20 00:16:53.372 0 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 658160 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 658160 ']' 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 658160 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.372 15:22:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 658160 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 658160' 00:16:53.632 killing process with pid 658160 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 658160 00:16:53.632 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.632 00:16:53.632 Latency(us) 00:16:53.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.632 =================================================================================================================== 
00:16:53.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 658160 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.632 rmmod nvme_tcp 00:16:53.632 rmmod nvme_fabrics 00:16:53.632 rmmod nvme_keyring 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 657813 ']' 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 657813 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 657813 ']' 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 657813 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:53.632 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 657813 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 657813' 00:16:53.892 killing process with pid 657813 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 657813 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 657813 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.892 15:22:03 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.454 15:22:05 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.454 00:16:56.454 real 0m22.199s 00:16:56.454 user 0m25.651s 
00:16:56.454 sys 0m6.546s 00:16:56.454 15:22:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.454 15:22:05 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:56.454 ************************************ 00:16:56.454 END TEST nvmf_queue_depth 00:16:56.454 ************************************ 00:16:56.454 15:22:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:56.454 15:22:05 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:56.454 15:22:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:56.454 15:22:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.454 15:22:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.454 ************************************ 00:16:56.455 START TEST nvmf_target_multipath 00:16:56.455 ************************************ 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:56.455 * Looking for test storage... 00:16:56.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.455 15:22:05 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.455 15:22:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:04.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:04.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:04.656 Found net devices under 0000:31:00.0: cvl_0_0 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:04.656 Found net devices under 0000:31:00.1: cvl_0_1 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:04.656 15:22:13 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:04.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:17:04.657 00:17:04.657 --- 10.0.0.2 ping statistics --- 00:17:04.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.657 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:04.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:17:04.657 00:17:04.657 --- 10.0.0.1 ping statistics --- 00:17:04.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.657 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:04.657 only one NIC for nvmf test 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.657 rmmod nvme_tcp 00:17:04.657 rmmod nvme_fabrics 00:17:04.657 rmmod nvme_keyring 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.657 15:22:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:06.038 00:17:06.038 real 0m10.057s 00:17:06.038 user 0m2.222s 00:17:06.038 sys 0m5.709s 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.038 15:22:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:06.038 ************************************ 00:17:06.039 END TEST nvmf_target_multipath 00:17:06.039 ************************************ 00:17:06.299 15:22:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.299 15:22:15 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:06.299 15:22:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.299 15:22:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.299 15:22:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.299 ************************************ 00:17:06.299 START TEST nvmf_zcopy 00:17:06.299 ************************************ 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:06.299 * Looking for test storage... 
00:17:06.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.299 15:22:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:06.300 15:22:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.439 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:14.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.440 
15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:14.440 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:14.440 Found net devices under 0000:31:00.0: cvl_0_0 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:14.440 Found net devices under 0000:31:00.1: cvl_0_1 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:17:14.440 00:17:14.440 --- 10.0.0.2 ping statistics --- 00:17:14.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.440 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:17:14.440 00:17:14.440 --- 10.0.0.1 ping statistics --- 00:17:14.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.440 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=669270 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 669270 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 669270 ']' 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.440 15:22:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.440 [2024-07-15 15:22:23.663216] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:14.440 [2024-07-15 15:22:23.663281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.440 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.440 [2024-07-15 15:22:23.740213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.440 [2024-07-15 15:22:23.812929] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.440 [2024-07-15 15:22:23.812967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
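For reference, the nvmf_tcp_init network setup traced above (interface names and addresses exactly as discovered in this run) reduces to roughly the following commands; this is a condensed sketch of what the helper does, not the helper itself:
ip netns add cvl_0_0_ns_spdk                                        # namespace that will own the target-side port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target interface into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic (port 4420) through
ping -c 1 10.0.0.2                                                  # host -> namespace reachability check, as verified above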
00:17:14.440 [2024-07-15 15:22:23.812975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.440 [2024-07-15 15:22:23.812981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.440 [2024-07-15 15:22:23.812987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.440 [2024-07-15 15:22:23.813007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 [2024-07-15 15:22:24.471638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 [2024-07-15 15:22:24.495796] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.010 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.010 malloc0 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.011 
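The target-side configuration issued through rpc_cmd above (TCP transport with zero-copy, subsystem cnode1, listener, malloc bdev) is equivalent to roughly the following direct scripts/rpc.py calls against the nvmf_tgt started in the namespace; a sketch run from the SPDK tree, assuming the default /var/tmp/spdk.sock RPC socket (the trace itself goes through the rpc_cmd wrapper), with the namespace attach following in the next traced step:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy enabled
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                             # 32 MiB ramdisk with 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # exposed as namespace 1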
15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:15.011 { 00:17:15.011 "params": { 00:17:15.011 "name": "Nvme$subsystem", 00:17:15.011 "trtype": "$TEST_TRANSPORT", 00:17:15.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.011 "adrfam": "ipv4", 00:17:15.011 "trsvcid": "$NVMF_PORT", 00:17:15.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.011 "hdgst": ${hdgst:-false}, 00:17:15.011 "ddgst": ${ddgst:-false} 00:17:15.011 }, 00:17:15.011 "method": "bdev_nvme_attach_controller" 00:17:15.011 } 00:17:15.011 EOF 00:17:15.011 )") 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:15.011 15:22:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:15.011 "params": { 00:17:15.011 "name": "Nvme1", 00:17:15.011 "trtype": "tcp", 00:17:15.011 "traddr": "10.0.0.2", 00:17:15.011 "adrfam": "ipv4", 00:17:15.011 "trsvcid": "4420", 00:17:15.011 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.011 "hdgst": false, 00:17:15.011 "ddgst": false 00:17:15.011 }, 00:17:15.011 "method": "bdev_nvme_attach_controller" 00:17:15.011 }' 00:17:15.011 [2024-07-15 15:22:24.587063] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:15.011 [2024-07-15 15:22:24.587111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid669618 ] 00:17:15.011 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.271 [2024-07-15 15:22:24.650667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.271 [2024-07-15 15:22:24.715323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.271 Running I/O for 10 seconds... 
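To reproduce this verify run outside the harness, the bdev_nvme_attach_controller parameters printed above can be wrapped in a standard SPDK JSON config and handed to bdevperf directly; the file path below is illustrative, and the "subsystems"/"config" wrapper is the usual JSON-config layout rather than a verbatim copy of what gen_nvmf_target_json emitted here:
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192    # 10 s verify workload, QD 128, 8 KiB I/O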
00:17:27.502 00:17:27.502 Latency(us) 00:17:27.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:27.502 Verification LBA range: start 0x0 length 0x1000 00:17:27.502 Nvme1n1 : 10.01 6868.07 53.66 0.00 0.00 18579.24 1638.40 27634.35 00:17:27.502 =================================================================================================================== 00:17:27.502 Total : 6868.07 53.66 0.00 0.00 18579.24 1638.40 27634.35 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=671632 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:27.502 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:27.502 { 00:17:27.503 "params": { 00:17:27.503 "name": "Nvme$subsystem", 00:17:27.503 "trtype": "$TEST_TRANSPORT", 00:17:27.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:27.503 "adrfam": "ipv4", 00:17:27.503 "trsvcid": "$NVMF_PORT", 00:17:27.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:27.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:27.503 "hdgst": ${hdgst:-false}, 00:17:27.503 "ddgst": ${ddgst:-false} 00:17:27.503 }, 00:17:27.503 "method": "bdev_nvme_attach_controller" 00:17:27.503 } 00:17:27.503 EOF 00:17:27.503 )") 00:17:27.503 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:27.503 [2024-07-15 15:22:35.038706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.038738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:27.503 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:27.503 15:22:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:27.503 "params": { 00:17:27.503 "name": "Nvme1", 00:17:27.503 "trtype": "tcp", 00:17:27.503 "traddr": "10.0.0.2", 00:17:27.503 "adrfam": "ipv4", 00:17:27.503 "trsvcid": "4420", 00:17:27.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:27.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:27.503 "hdgst": false, 00:17:27.503 "ddgst": false 00:17:27.503 }, 00:17:27.503 "method": "bdev_nvme_attach_controller" 00:17:27.503 }' 00:17:27.503 [2024-07-15 15:22:35.050705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.050717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.062733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.062744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.074765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.074776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.081355] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:27.503 [2024-07-15 15:22:35.081401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671632 ] 00:17:27.503 [2024-07-15 15:22:35.086796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.086807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.098828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.098839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.503 [2024-07-15 15:22:35.110861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.110871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.122898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.122908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.134949] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.134960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.142843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.503 [2024-07-15 15:22:35.146955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.146965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.158985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.158996] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.171018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.171034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.183051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.183065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.195081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.195091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.207111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.207122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.207314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.503 [2024-07-15 15:22:35.219144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.219157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.231178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.231193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.243207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.243219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.255239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.255252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.267270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.267280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.279313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.279331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.291340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.291353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.303375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.303388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.315406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.315419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.327438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.327451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.339482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.339500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 Running I/O for 5 seconds... 00:17:27.503 [2024-07-15 15:22:35.351504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.351515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.368095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.368114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.384577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.384596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.401447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.401467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.418135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.418153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.435342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.435361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.452458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.452477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.469710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.469729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.486917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.486936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.503053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.503072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.514132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.514150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.530018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.530036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.547193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.547210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.564257] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.564275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.581330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.581348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.598295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.503 [2024-07-15 15:22:35.598313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.503 [2024-07-15 15:22:35.615197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.615216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.632058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.632076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.648998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.649016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.665721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.665739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.682621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.682640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.699654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.699673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.715905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.715924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.732737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.732755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.748654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.748673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.765395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.765414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.781557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.781576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.799206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.799224] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.816279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.816297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.833394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.833413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.849989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.850008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.867124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.867143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.884174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.884194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.901317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.901336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.918428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.918446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.935467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.935487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.952215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.952233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.968695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.968714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:35.986178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:35.986197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.003281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.003300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.020426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.020445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.037015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.037041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.054272] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.054290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.071446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.071465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.087790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.087809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.104025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.104044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.121526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.121544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.138587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.138606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.154802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.154820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.171517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.171535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.188682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.188701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.204855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.204873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.222036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.222054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.238495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.238513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.255670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.255689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.272498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.272516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.288737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.288756] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.299868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.299891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.316168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.316187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.333257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.333276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.350113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.350135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.367445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.367464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.384125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.384143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.401462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.401480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.417628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.417646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.433590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.433608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.444767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.444787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.461389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.461408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.476691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.476710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.488808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.488827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.504718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:27.504 [2024-07-15 15:22:36.504737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:27.504 [2024-07-15 15:22:36.522070] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:27.504 [2024-07-15 15:22:36.522089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of *ERROR* lines repeats for every failed nvmf_subsystem_add_ns RPC between 15:22:36.522 and 15:22:40.360; the intervening occurrences are elided ...]
00:17:30.934 [2024-07-15 15:22:40.360173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:30.934 [2024-07-15 15:22:40.360192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:30.934 
00:17:30.934                                                                                Latency(us)
00:17:30.934 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s      Average       min           max
00:17:30.934 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:30.934 Nvme1n1                     :       5.01   13567.17     105.99      0.00      0.00      9425.02       4205.23      23265.28
00:17:30.934 ===================================================================================================================
00:17:30.934 Total                       :              13567.17     105.99      0.00      0.00      9425.02       4205.23      23265.28
00:17:30.934 [2024-07-15 15:22:40.371912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:30.934 [2024-07-15 15:22:40.371930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of *ERROR* lines repeats at roughly 12 ms intervals, last occurrence at 15:22:40.504267; the intervening occurrences are elided ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (671632) - No such process
00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 671632
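[Editor's note: the "Requested NSID 1 already in use" / "Unable to add namespace" pairs above are what spdk_nvmf_subsystem_add_ns_ext reports when an nvmf_subsystem_add_ns RPC asks for a namespace ID that is still attached to the subsystem; they are presumably provoked deliberately while this test churns namespace 1, and the background job (PID 671632) has already exited by the time the kill and wait just logged run. A minimal sketch of reproducing that error by hand is below; the rpc.py path and the malloc0 bdev name are assumptions carried over from the surrounding log, not taken from this run.]

  # Sketch only (assumed paths/names): adding the same NSID twice to one subsystem
  # makes the second call fail with "Requested NSID 1 already in use".
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # second add is rejected: NSID 1 already in use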
00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.934 delay0 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.934 15:22:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:31.194 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.194 [2024-07-15 15:22:40.645304] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:39.323 Initializing NVMe Controllers 00:17:39.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:39.323 Initialization complete. Launching workers. 
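[Editor's note: the rpc_cmd calls and the abort invocation traced above correspond roughly to the standalone commands sketched below. The flags, NQN, bdev names, and transport string are copied from the log lines above; the rpc.py path is an assumption. The abort run's completion statistics follow below.]

  # Sketch of the zcopy abort scenario just launched: replace namespace 1 with a
  # delay bdev, then drive abort traffic at it over TCP.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'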
00:17:39.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 25805 00:17:39.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25917, failed to submit 122 00:17:39.323 success 25835, unsuccess 82, failed 0 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.323 rmmod nvme_tcp 00:17:39.323 rmmod nvme_fabrics 00:17:39.323 rmmod nvme_keyring 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 669270 ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 669270 ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 669270' 00:17:39.323 killing process with pid 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 669270 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.323 15:22:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.704 15:22:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.704 00:17:40.704 real 0m34.347s 00:17:40.704 user 0m45.421s 00:17:40.704 sys 0m11.078s 00:17:40.704 15:22:50 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.704 15:22:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:40.704 ************************************ 00:17:40.704 END TEST nvmf_zcopy 00:17:40.704 ************************************ 00:17:40.704 15:22:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:40.704 15:22:50 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:40.704 15:22:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.704 15:22:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.704 15:22:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.704 ************************************ 00:17:40.704 START TEST nvmf_nmic 00:17:40.704 ************************************ 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:40.704 * Looking for test storage... 00:17:40.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.704 15:22:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.705 15:22:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:48.838 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:48.838 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:48.838 Found net devices under 0000:31:00.0: cvl_0_0 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:48.838 Found net devices under 0000:31:00.1: cvl_0_1 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:48.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:17:48.838 00:17:48.838 --- 10.0.0.2 ping statistics --- 00:17:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.838 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:48.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:17:48.838 00:17:48.838 --- 10.0.0.1 ping statistics --- 00:17:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.838 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.838 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=678542 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 678542 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 678542 ']' 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.839 15:22:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:48.839 [2024-07-15 15:22:57.859447] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:17:48.839 [2024-07-15 15:22:57.859512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.839 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.839 [2024-07-15 15:22:57.935423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.839 [2024-07-15 15:22:58.010405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.839 [2024-07-15 15:22:58.010445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:48.839 [2024-07-15 15:22:58.010453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.839 [2024-07-15 15:22:58.010459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.839 [2024-07-15 15:22:58.010465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.839 [2024-07-15 15:22:58.010574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.839 [2024-07-15 15:22:58.010690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.839 [2024-07-15 15:22:58.010846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.839 [2024-07-15 15:22:58.010848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.099 [2024-07-15 15:22:58.689506] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.099 Malloc0 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.099 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.359 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.359 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.359 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.359 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.359 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 [2024-07-15 15:22:58.748850] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:49.360 test case1: single bdev can't be used in multiple subsystems 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 [2024-07-15 15:22:58.784815] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:49.360 [2024-07-15 15:22:58.784836] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:49.360 [2024-07-15 15:22:58.784844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:49.360 request: 00:17:49.360 { 00:17:49.360 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:49.360 "namespace": { 00:17:49.360 "bdev_name": "Malloc0", 00:17:49.360 "no_auto_visible": false 00:17:49.360 }, 00:17:49.360 "method": "nvmf_subsystem_add_ns", 00:17:49.360 "req_id": 1 00:17:49.360 } 00:17:49.360 Got JSON-RPC error response 00:17:49.360 response: 00:17:49.360 { 00:17:49.360 "code": -32602, 00:17:49.360 "message": "Invalid parameters" 00:17:49.360 } 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:49.360 Adding namespace failed - expected result. 
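(Sketch, not part of the captured output.) For readers tracing test case1 above: the expected failure comes from SPDK's bdev claim model, visible in the ERROR lines — attaching a namespace opens the bdev with an exclusive_write claim, so a second subsystem cannot attach the same bdev. A minimal sketch of the equivalent sequence against a running nvmf_tgt, using only rpc.py calls that the traced rpc_cmd invocations above already exercise (paths, NQNs and serials copied from the log); the last call is the one expected to be rejected with "bdev Malloc0 already claimed":

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport init, as in nmic.sh@17
  $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # first attach: claims Malloc0 exclusively
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0                   # rejected: Malloc0 already claimed by cnode1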
00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:49.360 test case2: host connect to nvmf target in multiple paths 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:49.360 [2024-07-15 15:22:58.796937] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.360 15:22:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.740 15:23:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:52.648 15:23:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.648 15:23:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:52.648 15:23:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.648 15:23:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:52.648 15:23:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:54.557 15:23:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:54.557 [global] 00:17:54.557 thread=1 00:17:54.557 invalidate=1 00:17:54.557 rw=write 00:17:54.557 time_based=1 00:17:54.557 runtime=1 00:17:54.557 ioengine=libaio 00:17:54.557 direct=1 00:17:54.557 bs=4096 00:17:54.557 iodepth=1 00:17:54.557 norandommap=0 00:17:54.557 numjobs=1 00:17:54.557 00:17:54.557 verify_dump=1 00:17:54.557 verify_backlog=512 00:17:54.557 verify_state_save=0 00:17:54.557 do_verify=1 00:17:54.557 verify=crc32c-intel 00:17:54.557 [job0] 00:17:54.557 filename=/dev/nvme0n1 00:17:54.557 Could not set queue depth (nvme0n1) 00:17:54.817 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:54.817 fio-3.35 00:17:54.817 Starting 1 thread 00:17:55.756 00:17:55.756 job0: (groupid=0, jobs=1): err= 0: pid=679946: Mon Jul 15 15:23:05 2024 00:17:55.756 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:55.756 slat (nsec): min=7180, max=56033, avg=24786.54, stdev=3399.00 
00:17:55.756 clat (usec): min=476, max=1203, avg=1003.65, stdev=80.51 00:17:55.756 lat (usec): min=501, max=1228, avg=1028.43, stdev=80.77 00:17:55.756 clat percentiles (usec): 00:17:55.756 | 1.00th=[ 734], 5.00th=[ 840], 10.00th=[ 906], 20.00th=[ 955], 00:17:55.756 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1020], 60.00th=[ 1037], 00:17:55.756 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1074], 95.00th=[ 1090], 00:17:55.756 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1205], 99.95th=[ 1205], 00:17:55.756 | 99.99th=[ 1205] 00:17:55.756 write: IOPS=859, BW=3437KiB/s (3519kB/s)(3440KiB/1001msec); 0 zone resets 00:17:55.756 slat (usec): min=9, max=27099, avg=58.67, stdev=923.21 00:17:55.756 clat (usec): min=178, max=806, avg=480.53, stdev=102.17 00:17:55.756 lat (usec): min=211, max=27737, avg=539.20, stdev=934.71 00:17:55.756 clat percentiles (usec): 00:17:55.756 | 1.00th=[ 253], 5.00th=[ 334], 10.00th=[ 355], 20.00th=[ 379], 00:17:55.756 | 30.00th=[ 429], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 498], 00:17:55.756 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 619], 95.00th=[ 668], 00:17:55.756 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 807], 99.95th=[ 807], 00:17:55.756 | 99.99th=[ 807] 00:17:55.756 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:55.756 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:55.756 lat (usec) : 250=0.58%, 500=37.32%, 750=24.93%, 1000=12.54% 00:17:55.756 lat (msec) : 2=24.64% 00:17:55.756 cpu : usr=1.30%, sys=4.40%, ctx=1375, majf=0, minf=1 00:17:55.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:55.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.757 issued rwts: total=512,860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.757 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:55.757 00:17:55.757 Run status group 0 (all jobs): 00:17:55.757 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:17:55.757 WRITE: bw=3437KiB/s (3519kB/s), 3437KiB/s-3437KiB/s (3519kB/s-3519kB/s), io=3440KiB (3523kB), run=1001-1001msec 00:17:55.757 00:17:55.757 Disk stats (read/write): 00:17:55.757 nvme0n1: ios=537/676, merge=0/0, ticks=1475/313, in_queue=1788, util=98.80% 00:17:55.757 15:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.017 rmmod nvme_tcp 00:17:56.017 rmmod nvme_fabrics 00:17:56.017 rmmod nvme_keyring 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 678542 ']' 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 678542 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 678542 ']' 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 678542 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 678542 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 678542' 00:17:56.017 killing process with pid 678542 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 678542 00:17:56.017 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 678542 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.277 15:23:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.830 15:23:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.830 00:17:58.830 real 0m17.708s 00:17:58.830 user 0m44.766s 00:17:58.830 sys 0m6.278s 00:17:58.830 15:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.830 15:23:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:58.830 ************************************ 00:17:58.830 END TEST nvmf_nmic 00:17:58.830 ************************************ 00:17:58.830 15:23:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:58.830 15:23:07 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:58.830 15:23:07 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.830 15:23:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.830 15:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.830 ************************************ 00:17:58.830 START TEST nvmf_fio_target 00:17:58.830 ************************************ 00:17:58.830 15:23:07 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:58.830 * Looking for test storage... 00:17:58.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.830 15:23:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.831 15:23:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.002 15:23:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:07.002 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:07.002 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.002 15:23:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:07.002 Found net devices under 0000:31:00.0: cvl_0_0 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:07.002 Found net devices under 0000:31:00.1: cvl_0_1 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:18:07.002 00:18:07.002 --- 10.0.0.2 ping statistics --- 00:18:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.002 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:18:07.002 00:18:07.002 --- 10.0.0.1 ping statistics --- 00:18:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.002 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=684652 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 684652 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 684652 ']' 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.002 15:23:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.002 [2024-07-15 15:23:15.791266] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:18:07.002 [2024-07-15 15:23:15.791326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.003 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.003 [2024-07-15 15:23:15.869694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:07.003 [2024-07-15 15:23:15.943549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.003 [2024-07-15 15:23:15.943589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.003 [2024-07-15 15:23:15.943596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.003 [2024-07-15 15:23:15.943603] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.003 [2024-07-15 15:23:15.943609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.003 [2024-07-15 15:23:15.943724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.003 [2024-07-15 15:23:15.943861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.003 [2024-07-15 15:23:15.943934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.003 [2024-07-15 15:23:15.943935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.003 15:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:07.262 [2024-07-15 15:23:16.755030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.262 15:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.522 15:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:07.522 15:23:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.782 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:07.782 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:07.782 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
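Pulled out of the xtrace, the target bring-up that target/fio.sh performs here and in the lines that follow reduces to a handful of rpc.py calls. A condensed sketch of that sequence, with the jenkins workspace path shortened to ./scripts/rpc.py for readability (same calls and arguments as in the log):

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
# seven 64 MB malloc bdevs with 512-byte blocks -> Malloc0..Malloc6
for i in $(seq 0 6); do $RPC bdev_malloc_create 64 512; done
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side (root namespace), hostnqn/hostid flags omitted here:
# nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  -> nvme0n1..nvme0n4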
00:18:07.782 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.042 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:08.042 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:08.303 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.303 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:08.303 15:23:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.563 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:08.563 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:08.825 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:08.825 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:08.825 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:09.086 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:09.086 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.086 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:09.086 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:09.346 15:23:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.607 [2024-07-15 15:23:18.996606] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.607 15:23:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:09.607 15:23:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:09.867 15:23:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:11.249 15:23:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:13.788 15:23:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:13.788 [global] 00:18:13.788 thread=1 00:18:13.788 invalidate=1 00:18:13.788 rw=write 00:18:13.788 time_based=1 00:18:13.788 runtime=1 00:18:13.788 ioengine=libaio 00:18:13.788 direct=1 00:18:13.788 bs=4096 00:18:13.788 iodepth=1 00:18:13.788 norandommap=0 00:18:13.788 numjobs=1 00:18:13.788 00:18:13.788 verify_dump=1 00:18:13.788 verify_backlog=512 00:18:13.788 verify_state_save=0 00:18:13.788 do_verify=1 00:18:13.788 verify=crc32c-intel 00:18:13.788 [job0] 00:18:13.788 filename=/dev/nvme0n1 00:18:13.788 [job1] 00:18:13.788 filename=/dev/nvme0n2 00:18:13.788 [job2] 00:18:13.788 filename=/dev/nvme0n3 00:18:13.788 [job3] 00:18:13.788 filename=/dev/nvme0n4 00:18:13.788 Could not set queue depth (nvme0n1) 00:18:13.788 Could not set queue depth (nvme0n2) 00:18:13.788 Could not set queue depth (nvme0n3) 00:18:13.788 Could not set queue depth (nvme0n4) 00:18:13.788 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.788 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.788 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.788 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:13.788 fio-3.35 00:18:13.788 Starting 4 threads 00:18:15.191 00:18:15.191 job0: (groupid=0, jobs=1): err= 0: pid=686262: Mon Jul 15 15:23:24 2024 00:18:15.191 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:15.191 slat (nsec): min=25600, max=60634, avg=26973.76, stdev=4151.01 00:18:15.191 clat (usec): min=820, max=1375, avg=1099.97, stdev=95.67 00:18:15.191 lat (usec): min=846, max=1401, avg=1126.95, stdev=95.37 00:18:15.191 clat percentiles (usec): 00:18:15.191 | 1.00th=[ 848], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1020], 00:18:15.191 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1139], 00:18:15.191 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:18:15.191 | 99.00th=[ 1319], 99.50th=[ 1352], 99.90th=[ 1369], 99.95th=[ 1369], 00:18:15.191 | 99.99th=[ 1369] 00:18:15.191 write: IOPS=618, BW=2474KiB/s (2533kB/s)(2476KiB/1001msec); 0 zone resets 00:18:15.191 slat (nsec): min=9151, max=54297, avg=30950.63, stdev=9648.61 00:18:15.191 
clat (usec): min=253, max=1008, avg=637.29, stdev=122.31 00:18:15.191 lat (usec): min=263, max=1042, avg=668.24, stdev=125.79 00:18:15.191 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 338], 5.00th=[ 420], 10.00th=[ 482], 20.00th=[ 537], 00:18:15.192 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668], 00:18:15.192 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 832], 00:18:15.192 | 99.00th=[ 922], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1012], 00:18:15.192 | 99.99th=[ 1012] 00:18:15.192 bw ( KiB/s): min= 4096, max= 4096, per=36.47%, avg=4096.00, stdev= 0.00, samples=1 00:18:15.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:15.192 lat (usec) : 500=7.16%, 750=38.99%, 1000=14.50% 00:18:15.192 lat (msec) : 2=39.35% 00:18:15.192 cpu : usr=1.80%, sys=5.00%, ctx=1133, majf=0, minf=1 00:18:15.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 issued rwts: total=512,619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.192 job1: (groupid=0, jobs=1): err= 0: pid=686264: Mon Jul 15 15:23:24 2024 00:18:15.192 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:15.192 slat (nsec): min=7383, max=45941, avg=27542.57, stdev=3667.84 00:18:15.192 clat (usec): min=634, max=1542, avg=1021.66, stdev=137.40 00:18:15.192 lat (usec): min=661, max=1569, avg=1049.20, stdev=137.08 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 889], 00:18:15.192 | 30.00th=[ 930], 40.00th=[ 988], 50.00th=[ 1029], 60.00th=[ 1057], 00:18:15.192 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[ 1237], 00:18:15.192 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1549], 99.95th=[ 1549], 00:18:15.192 | 99.99th=[ 1549] 00:18:15.192 write: IOPS=716, BW=2865KiB/s (2934kB/s)(2868KiB/1001msec); 0 zone resets 00:18:15.192 slat (nsec): min=8992, max=67144, avg=31175.87, stdev=9993.83 00:18:15.192 clat (usec): min=268, max=2341, avg=599.27, stdev=137.76 00:18:15.192 lat (usec): min=279, max=2376, avg=630.44, stdev=140.80 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 351], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 490], 00:18:15.192 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:18:15.192 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 799], 00:18:15.192 | 99.00th=[ 857], 99.50th=[ 914], 99.90th=[ 2343], 99.95th=[ 2343], 00:18:15.192 | 99.99th=[ 2343] 00:18:15.192 bw ( KiB/s): min= 4096, max= 4096, per=36.47%, avg=4096.00, stdev= 0.00, samples=1 00:18:15.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:15.192 lat (usec) : 500=13.91%, 750=39.46%, 1000=22.78% 00:18:15.192 lat (msec) : 2=23.76%, 4=0.08% 00:18:15.192 cpu : usr=1.70%, sys=5.80%, ctx=1230, majf=0, minf=1 00:18:15.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 issued rwts: total=512,717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.192 job2: (groupid=0, jobs=1): err= 0: pid=686281: Mon Jul 15 15:23:24 2024 00:18:15.192 
read: IOPS=561, BW=2246KiB/s (2300kB/s)(2248KiB/1001msec) 00:18:15.192 slat (nsec): min=6991, max=50518, avg=25739.67, stdev=5028.31 00:18:15.192 clat (usec): min=312, max=1573, avg=733.35, stdev=146.21 00:18:15.192 lat (usec): min=320, max=1599, avg=759.09, stdev=146.81 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 404], 5.00th=[ 478], 10.00th=[ 545], 20.00th=[ 594], 00:18:15.192 | 30.00th=[ 635], 40.00th=[ 693], 50.00th=[ 758], 60.00th=[ 807], 00:18:15.192 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 914], 00:18:15.192 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1582], 99.95th=[ 1582], 00:18:15.192 | 99.99th=[ 1582] 00:18:15.192 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:15.192 slat (usec): min=9, max=19800, avg=50.22, stdev=618.90 00:18:15.192 clat (usec): min=113, max=1083, avg=497.16, stdev=152.51 00:18:15.192 lat (usec): min=123, max=20529, avg=547.39, stdev=645.04 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 215], 5.00th=[ 255], 10.00th=[ 318], 20.00th=[ 351], 00:18:15.192 | 30.00th=[ 396], 40.00th=[ 445], 50.00th=[ 482], 60.00th=[ 537], 00:18:15.192 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 750], 00:18:15.192 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 930], 99.95th=[ 1090], 00:18:15.192 | 99.99th=[ 1090] 00:18:15.192 bw ( KiB/s): min= 4096, max= 4096, per=36.47%, avg=4096.00, stdev= 0.00, samples=1 00:18:15.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:15.192 lat (usec) : 250=2.84%, 500=33.67%, 750=41.99%, 1000=21.31% 00:18:15.192 lat (msec) : 2=0.19% 00:18:15.192 cpu : usr=2.00%, sys=4.90%, ctx=1590, majf=0, minf=1 00:18:15.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 issued rwts: total=562,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.192 job3: (groupid=0, jobs=1): err= 0: pid=686287: Mon Jul 15 15:23:24 2024 00:18:15.192 read: IOPS=15, BW=62.6KiB/s (64.1kB/s)(64.0KiB/1023msec) 00:18:15.192 slat (nsec): min=9735, max=26166, avg=24739.75, stdev=4008.50 00:18:15.192 clat (usec): min=1250, max=42904, avg=39484.30, stdev=10198.37 00:18:15.192 lat (usec): min=1260, max=42929, avg=39509.04, stdev=10202.37 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 1254], 5.00th=[ 1254], 10.00th=[41681], 20.00th=[41681], 00:18:15.192 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:15.192 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:15.192 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:15.192 | 99.99th=[42730] 00:18:15.192 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:18:15.192 slat (nsec): min=9788, max=54086, avg=30635.70, stdev=9673.13 00:18:15.192 clat (usec): min=240, max=961, avg=723.67, stdev=111.55 00:18:15.192 lat (usec): min=252, max=983, avg=754.31, stdev=116.03 00:18:15.192 clat percentiles (usec): 00:18:15.192 | 1.00th=[ 429], 5.00th=[ 506], 10.00th=[ 562], 20.00th=[ 652], 00:18:15.192 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 734], 60.00th=[ 766], 00:18:15.192 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 848], 95.00th=[ 873], 00:18:15.192 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 963], 00:18:15.192 | 99.99th=[ 963] 
00:18:15.192 bw ( KiB/s): min= 4096, max= 4096, per=36.47%, avg=4096.00, stdev= 0.00, samples=1 00:18:15.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:15.192 lat (usec) : 250=0.19%, 500=4.55%, 750=49.62%, 1000=42.61% 00:18:15.192 lat (msec) : 2=0.19%, 50=2.84% 00:18:15.192 cpu : usr=1.08%, sys=1.17%, ctx=529, majf=0, minf=1 00:18:15.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.192 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.192 00:18:15.192 Run status group 0 (all jobs): 00:18:15.192 READ: bw=6264KiB/s (6414kB/s), 62.6KiB/s-2246KiB/s (64.1kB/s-2300kB/s), io=6408KiB (6562kB), run=1001-1023msec 00:18:15.192 WRITE: bw=11.0MiB/s (11.5MB/s), 2002KiB/s-4092KiB/s (2050kB/s-4190kB/s), io=11.2MiB (11.8MB), run=1001-1023msec 00:18:15.192 00:18:15.192 Disk stats (read/write): 00:18:15.192 nvme0n1: ios=457/512, merge=0/0, ticks=1268/256, in_queue=1524, util=84.57% 00:18:15.192 nvme0n2: ios=502/512, merge=0/0, ticks=1320/242, in_queue=1562, util=88.58% 00:18:15.192 nvme0n3: ios=572/762, merge=0/0, ticks=541/381, in_queue=922, util=93.06% 00:18:15.192 nvme0n4: ios=68/512, merge=0/0, ticks=1090/344, in_queue=1434, util=94.05% 00:18:15.192 15:23:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:15.192 [global] 00:18:15.192 thread=1 00:18:15.192 invalidate=1 00:18:15.192 rw=randwrite 00:18:15.193 time_based=1 00:18:15.193 runtime=1 00:18:15.193 ioengine=libaio 00:18:15.193 direct=1 00:18:15.193 bs=4096 00:18:15.193 iodepth=1 00:18:15.193 norandommap=0 00:18:15.193 numjobs=1 00:18:15.193 00:18:15.193 verify_dump=1 00:18:15.193 verify_backlog=512 00:18:15.193 verify_state_save=0 00:18:15.193 do_verify=1 00:18:15.193 verify=crc32c-intel 00:18:15.193 [job0] 00:18:15.193 filename=/dev/nvme0n1 00:18:15.193 [job1] 00:18:15.193 filename=/dev/nvme0n2 00:18:15.193 [job2] 00:18:15.193 filename=/dev/nvme0n3 00:18:15.193 [job3] 00:18:15.193 filename=/dev/nvme0n4 00:18:15.193 Could not set queue depth (nvme0n1) 00:18:15.193 Could not set queue depth (nvme0n2) 00:18:15.193 Could not set queue depth (nvme0n3) 00:18:15.193 Could not set queue depth (nvme0n4) 00:18:15.462 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.462 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.462 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.462 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:15.462 fio-3.35 00:18:15.462 Starting 4 threads 00:18:16.881 00:18:16.881 job0: (groupid=0, jobs=1): err= 0: pid=686784: Mon Jul 15 15:23:26 2024 00:18:16.881 read: IOPS=512, BW=2051KiB/s (2100kB/s)(2104KiB/1026msec) 00:18:16.881 slat (nsec): min=6195, max=49924, avg=20794.22, stdev=7841.18 00:18:16.881 clat (usec): min=553, max=42709, avg=1058.21, stdev=3138.51 00:18:16.881 lat (usec): min=590, max=42734, avg=1079.01, stdev=3138.97 00:18:16.881 clat percentiles (usec): 00:18:16.881 | 1.00th=[ 611], 5.00th=[ 668], 10.00th=[ 693], 
20.00th=[ 725], 00:18:16.881 | 30.00th=[ 758], 40.00th=[ 791], 50.00th=[ 824], 60.00th=[ 857], 00:18:16.881 | 70.00th=[ 889], 80.00th=[ 914], 90.00th=[ 955], 95.00th=[ 979], 00:18:16.881 | 99.00th=[ 1090], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:16.882 | 99.99th=[42730] 00:18:16.882 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:18:16.882 slat (nsec): min=8418, max=52396, avg=22862.17, stdev=9761.16 00:18:16.882 clat (usec): min=211, max=943, avg=415.26, stdev=137.30 00:18:16.882 lat (usec): min=228, max=976, avg=438.12, stdev=141.90 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 277], 00:18:16.882 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 396], 60.00th=[ 437], 00:18:16.882 | 70.00th=[ 469], 80.00th=[ 519], 90.00th=[ 594], 95.00th=[ 668], 00:18:16.882 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 873], 99.95th=[ 947], 00:18:16.882 | 99.99th=[ 947] 00:18:16.882 bw ( KiB/s): min= 4096, max= 4096, per=33.06%, avg=4096.00, stdev= 0.00, samples=2 00:18:16.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:18:16.882 lat (usec) : 250=5.35%, 500=45.74%, 750=21.87%, 1000=25.87% 00:18:16.882 lat (msec) : 2=0.97%, 50=0.19% 00:18:16.882 cpu : usr=2.93%, sys=4.10%, ctx=1550, majf=0, minf=1 00:18:16.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.882 job1: (groupid=0, jobs=1): err= 0: pid=686791: Mon Jul 15 15:23:26 2024 00:18:16.882 read: IOPS=18, BW=73.0KiB/s (74.8kB/s)(76.0KiB/1041msec) 00:18:16.882 slat (nsec): min=10219, max=27080, avg=25305.53, stdev=3667.15 00:18:16.882 clat (usec): min=889, max=42011, avg=39447.56, stdev=9348.63 00:18:16.882 lat (usec): min=900, max=42037, avg=39472.87, stdev=9352.27 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 889], 5.00th=[ 889], 10.00th=[41157], 20.00th=[41157], 00:18:16.882 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:18:16.882 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:16.882 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:16.882 | 99.99th=[42206] 00:18:16.882 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:16.882 slat (nsec): min=8305, max=66020, avg=28020.04, stdev=9926.85 00:18:16.882 clat (usec): min=245, max=1215, avg=532.76, stdev=128.35 00:18:16.882 lat (usec): min=257, max=1247, avg=560.78, stdev=132.88 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 375], 20.00th=[ 412], 00:18:16.882 | 30.00th=[ 457], 40.00th=[ 498], 50.00th=[ 537], 60.00th=[ 578], 00:18:16.882 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 693], 95.00th=[ 725], 00:18:16.882 | 99.00th=[ 783], 99.50th=[ 807], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:16.882 | 99.99th=[ 1221] 00:18:16.882 bw ( KiB/s): min= 4096, max= 4096, per=33.06%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.882 lat (usec) : 250=0.19%, 500=38.61%, 750=54.80%, 1000=2.82% 00:18:16.882 lat (msec) : 2=0.19%, 50=3.39% 00:18:16.882 cpu : usr=1.35%, sys=1.44%, ctx=531, majf=0, minf=1 
00:18:16.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.882 job2: (groupid=0, jobs=1): err= 0: pid=686801: Mon Jul 15 15:23:26 2024 00:18:16.882 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:18:16.882 slat (nsec): min=7448, max=45672, avg=27395.82, stdev=1963.46 00:18:16.882 clat (usec): min=598, max=42553, avg=1144.09, stdev=2573.78 00:18:16.882 lat (usec): min=625, max=42581, avg=1171.49, stdev=2573.77 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 898], 00:18:16.882 | 30.00th=[ 930], 40.00th=[ 963], 50.00th=[ 996], 60.00th=[ 1020], 00:18:16.882 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1156], 00:18:16.882 | 99.00th=[ 1221], 99.50th=[ 1319], 99.90th=[42730], 99.95th=[42730], 00:18:16.882 | 99.99th=[42730] 00:18:16.882 write: IOPS=663, BW=2653KiB/s (2717kB/s)(2656KiB/1001msec); 0 zone resets 00:18:16.882 slat (nsec): min=9025, max=52205, avg=31280.89, stdev=8020.91 00:18:16.882 clat (usec): min=221, max=955, avg=557.13, stdev=128.23 00:18:16.882 lat (usec): min=231, max=988, avg=588.41, stdev=130.72 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 249], 5.00th=[ 343], 10.00th=[ 371], 20.00th=[ 453], 00:18:16.882 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 594], 00:18:16.882 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:18:16.882 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 955], 99.95th=[ 955], 00:18:16.882 | 99.99th=[ 955] 00:18:16.882 bw ( KiB/s): min= 4096, max= 4096, per=33.06%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.882 lat (usec) : 250=0.60%, 500=16.92%, 750=36.22%, 1000=26.02% 00:18:16.882 lat (msec) : 2=20.07%, 50=0.17% 00:18:16.882 cpu : usr=2.40%, sys=4.80%, ctx=1178, majf=0, minf=1 00:18:16.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 issued rwts: total=512,664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.882 job3: (groupid=0, jobs=1): err= 0: pid=686807: Mon Jul 15 15:23:26 2024 00:18:16.882 read: IOPS=559, BW=2238KiB/s (2291kB/s)(2240KiB/1001msec) 00:18:16.882 slat (nsec): min=6556, max=61795, avg=23928.61, stdev=5616.01 00:18:16.882 clat (usec): min=450, max=1041, avg=772.78, stdev=137.49 00:18:16.882 lat (usec): min=474, max=1065, avg=796.71, stdev=137.63 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 502], 5.00th=[ 537], 10.00th=[ 553], 20.00th=[ 635], 00:18:16.882 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 799], 60.00th=[ 840], 00:18:16.882 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:18:16.882 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1045], 99.95th=[ 1045], 00:18:16.882 | 99.99th=[ 1045] 00:18:16.882 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:16.882 slat (nsec): min=9008, max=63474, avg=29232.01, stdev=7199.47 00:18:16.882 clat 
(usec): min=150, max=758, avg=499.56, stdev=109.35 00:18:16.882 lat (usec): min=172, max=787, avg=528.80, stdev=111.90 00:18:16.882 clat percentiles (usec): 00:18:16.882 | 1.00th=[ 227], 5.00th=[ 330], 10.00th=[ 363], 20.00th=[ 400], 00:18:16.882 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 529], 00:18:16.882 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 668], 00:18:16.882 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 734], 99.95th=[ 758], 00:18:16.882 | 99.99th=[ 758] 00:18:16.882 bw ( KiB/s): min= 4096, max= 4096, per=33.06%, avg=4096.00, stdev= 0.00, samples=1 00:18:16.882 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:16.882 lat (usec) : 250=1.20%, 500=31.50%, 750=46.40%, 1000=20.64% 00:18:16.882 lat (msec) : 2=0.25% 00:18:16.882 cpu : usr=1.90%, sys=5.00%, ctx=1584, majf=0, minf=1 00:18:16.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:16.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.882 issued rwts: total=560,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:16.882 00:18:16.882 Run status group 0 (all jobs): 00:18:16.882 READ: bw=6213KiB/s (6362kB/s), 73.0KiB/s-2238KiB/s (74.8kB/s-2291kB/s), io=6468KiB (6623kB), run=1001-1041msec 00:18:16.882 WRITE: bw=12.1MiB/s (12.7MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=12.6MiB (13.2MB), run=1001-1041msec 00:18:16.882 00:18:16.882 Disk stats (read/write): 00:18:16.883 nvme0n1: ios=562/957, merge=0/0, ticks=451/330, in_queue=781, util=88.58% 00:18:16.883 nvme0n2: ios=64/512, merge=0/0, ticks=648/214, in_queue=862, util=92.67% 00:18:16.883 nvme0n3: ios=543/512, merge=0/0, ticks=727/220, in_queue=947, util=98.74% 00:18:16.883 nvme0n4: ios=547/794, merge=0/0, ticks=432/349, in_queue=781, util=91.20% 00:18:16.883 15:23:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:16.883 [global] 00:18:16.883 thread=1 00:18:16.883 invalidate=1 00:18:16.883 rw=write 00:18:16.883 time_based=1 00:18:16.883 runtime=1 00:18:16.883 ioengine=libaio 00:18:16.883 direct=1 00:18:16.883 bs=4096 00:18:16.883 iodepth=128 00:18:16.883 norandommap=0 00:18:16.883 numjobs=1 00:18:16.883 00:18:16.883 verify_dump=1 00:18:16.883 verify_backlog=512 00:18:16.883 verify_state_save=0 00:18:16.883 do_verify=1 00:18:16.883 verify=crc32c-intel 00:18:16.883 [job0] 00:18:16.883 filename=/dev/nvme0n1 00:18:16.883 [job1] 00:18:16.883 filename=/dev/nvme0n2 00:18:16.883 [job2] 00:18:16.883 filename=/dev/nvme0n3 00:18:16.883 [job3] 00:18:16.883 filename=/dev/nvme0n4 00:18:16.883 Could not set queue depth (nvme0n1) 00:18:16.883 Could not set queue depth (nvme0n2) 00:18:16.883 Could not set queue depth (nvme0n3) 00:18:16.883 Could not set queue depth (nvme0n4) 00:18:17.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.149 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.149 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.149 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:17.149 fio-3.35 00:18:17.149 Starting 4 threads 00:18:18.553 
00:18:18.553 job0: (groupid=0, jobs=1): err= 0: pid=687306: Mon Jul 15 15:23:27 2024 00:18:18.553 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:18:18.553 slat (nsec): min=897, max=15563k, avg=58378.33, stdev=450718.54 00:18:18.553 clat (usec): min=2337, max=75582, avg=8082.48, stdev=7557.55 00:18:18.554 lat (usec): min=2346, max=75599, avg=8140.86, stdev=7598.36 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 2802], 5.00th=[ 3228], 10.00th=[ 4047], 20.00th=[ 5080], 00:18:18.554 | 30.00th=[ 5473], 40.00th=[ 5866], 50.00th=[ 6194], 60.00th=[ 6390], 00:18:18.554 | 70.00th=[ 7046], 80.00th=[ 8225], 90.00th=[14615], 95.00th=[17433], 00:18:18.554 | 99.00th=[39060], 99.50th=[65274], 99.90th=[71828], 99.95th=[71828], 00:18:18.554 | 99.99th=[76022] 00:18:18.554 write: IOPS=7521, BW=29.4MiB/s (30.8MB/s)(29.6MiB/1006msec); 0 zone resets 00:18:18.554 slat (nsec): min=1601, max=16074k, avg=67370.28, stdev=528183.66 00:18:18.554 clat (usec): min=721, max=75217, avg=9748.09, stdev=12726.87 00:18:18.554 lat (usec): min=729, max=75227, avg=9815.46, stdev=12814.67 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 1598], 5.00th=[ 2802], 10.00th=[ 3392], 20.00th=[ 3818], 00:18:18.554 | 30.00th=[ 4555], 40.00th=[ 5145], 50.00th=[ 5604], 60.00th=[ 5932], 00:18:18.554 | 70.00th=[ 6652], 80.00th=[ 8455], 90.00th=[19006], 95.00th=[44303], 00:18:18.554 | 99.00th=[65799], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:18:18.554 | 99.99th=[74974] 00:18:18.554 bw ( KiB/s): min=18584, max=40936, per=42.23%, avg=29760.00, stdev=15805.25, samples=2 00:18:18.554 iops : min= 4646, max=10234, avg=7440.00, stdev=3951.31, samples=2 00:18:18.554 lat (usec) : 750=0.01%, 1000=0.01% 00:18:18.554 lat (msec) : 2=1.03%, 4=15.76%, 10=66.48%, 20=10.25%, 50=4.04% 00:18:18.554 lat (msec) : 100=2.43% 00:18:18.554 cpu : usr=5.57%, sys=7.66%, ctx=534, majf=0, minf=1 00:18:18.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:18.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.554 issued rwts: total=6656,7567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.554 job1: (groupid=0, jobs=1): err= 0: pid=687308: Mon Jul 15 15:23:27 2024 00:18:18.554 read: IOPS=5011, BW=19.6MiB/s (20.5MB/s)(20.5MiB/1047msec) 00:18:18.554 slat (nsec): min=894, max=60117k, avg=100444.48, stdev=1131454.87 00:18:18.554 clat (usec): min=1812, max=82949, avg=14132.51, stdev=13182.76 00:18:18.554 lat (usec): min=3127, max=82976, avg=14232.96, stdev=13235.85 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 3326], 5.00th=[ 4178], 10.00th=[ 4948], 20.00th=[ 6063], 00:18:18.554 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 8356], 60.00th=[13698], 00:18:18.554 | 70.00th=[15008], 80.00th=[19268], 90.00th=[24511], 95.00th=[47449], 00:18:18.554 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:18:18.554 | 99.99th=[83362] 00:18:18.554 write: IOPS=5379, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1047msec); 0 zone resets 00:18:18.554 slat (nsec): min=1576, max=15858k, avg=78473.82, stdev=602458.88 00:18:18.554 clat (usec): min=928, max=84331, avg=10410.43, stdev=9652.38 00:18:18.554 lat (usec): min=938, max=84359, avg=10488.90, stdev=9715.24 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 1663], 5.00th=[ 3163], 10.00th=[ 3654], 20.00th=[ 4686], 00:18:18.554 | 30.00th=[ 5538], 
40.00th=[ 5932], 50.00th=[ 6521], 60.00th=[ 7963], 00:18:18.554 | 70.00th=[10683], 80.00th=[12911], 90.00th=[21890], 95.00th=[27919], 00:18:18.554 | 99.00th=[47449], 99.50th=[49021], 99.90th=[84411], 99.95th=[84411], 00:18:18.554 | 99.99th=[84411] 00:18:18.554 bw ( KiB/s): min=16504, max=28544, per=31.96%, avg=22524.00, stdev=8513.57, samples=2 00:18:18.554 iops : min= 4126, max= 7136, avg=5631.00, stdev=2128.39, samples=2 00:18:18.554 lat (usec) : 1000=0.03% 00:18:18.554 lat (msec) : 2=0.85%, 4=7.81%, 10=51.27%, 20=25.48%, 50=12.28% 00:18:18.554 lat (msec) : 100=2.27% 00:18:18.554 cpu : usr=4.40%, sys=5.83%, ctx=474, majf=0, minf=1 00:18:18.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:18.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.554 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.554 job2: (groupid=0, jobs=1): err= 0: pid=687315: Mon Jul 15 15:23:27 2024 00:18:18.554 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:18:18.554 slat (nsec): min=944, max=26544k, avg=175793.25, stdev=1153055.76 00:18:18.554 clat (usec): min=6073, max=54274, avg=21083.41, stdev=8384.05 00:18:18.554 lat (usec): min=6080, max=54361, avg=21259.20, stdev=8489.20 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 6718], 5.00th=[13304], 10.00th=[13566], 20.00th=[14091], 00:18:18.554 | 30.00th=[15401], 40.00th=[17433], 50.00th=[19006], 60.00th=[21365], 00:18:18.554 | 70.00th=[23462], 80.00th=[26608], 90.00th=[31589], 95.00th=[38536], 00:18:18.554 | 99.00th=[48497], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:18:18.554 | 99.99th=[54264] 00:18:18.554 write: IOPS=2679, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1003msec); 0 zone resets 00:18:18.554 slat (nsec): min=1689, max=14784k, avg=185803.27, stdev=921391.43 00:18:18.554 clat (usec): min=558, max=76684, avg=27170.25, stdev=17855.81 00:18:18.554 lat (usec): min=588, max=76694, avg=27356.05, stdev=17955.53 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[ 816], 5.00th=[ 2212], 10.00th=[ 3228], 20.00th=[11076], 00:18:18.554 | 30.00th=[14877], 40.00th=[17171], 50.00th=[22938], 60.00th=[34866], 00:18:18.554 | 70.00th=[41681], 80.00th=[45351], 90.00th=[49021], 95.00th=[54264], 00:18:18.554 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:18:18.554 | 99.99th=[77071] 00:18:18.554 bw ( KiB/s): min=10160, max=10328, per=14.54%, avg=10244.00, stdev=118.79, samples=2 00:18:18.554 iops : min= 2540, max= 2582, avg=2561.00, stdev=29.70, samples=2 00:18:18.554 lat (usec) : 750=0.50%, 1000=0.25% 00:18:18.554 lat (msec) : 2=1.14%, 4=3.73%, 10=4.54%, 20=42.28%, 50=43.01% 00:18:18.554 lat (msec) : 100=4.55% 00:18:18.554 cpu : usr=2.20%, sys=2.99%, ctx=287, majf=0, minf=1 00:18:18.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:18.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.554 issued rwts: total=2560,2688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.554 job3: (groupid=0, jobs=1): err= 0: pid=687316: Mon Jul 15 15:23:27 2024 00:18:18.554 read: IOPS=2240, BW=8960KiB/s (9175kB/s)(8996KiB/1004msec) 00:18:18.554 slat (nsec): min=1451, 
max=14345k, avg=173537.60, stdev=1052675.16 00:18:18.554 clat (usec): min=3530, max=43428, avg=20901.67, stdev=5919.54 00:18:18.554 lat (usec): min=3535, max=43455, avg=21075.20, stdev=6004.23 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[11994], 5.00th=[14877], 10.00th=[15139], 20.00th=[15795], 00:18:18.554 | 30.00th=[16909], 40.00th=[17957], 50.00th=[19006], 60.00th=[20841], 00:18:18.554 | 70.00th=[22676], 80.00th=[26346], 90.00th=[30016], 95.00th=[31589], 00:18:18.554 | 99.00th=[36439], 99.50th=[36439], 99.90th=[39060], 99.95th=[41157], 00:18:18.554 | 99.99th=[43254] 00:18:18.554 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:18:18.554 slat (nsec): min=1728, max=31028k, avg=231641.16, stdev=1381323.85 00:18:18.554 clat (usec): min=9217, max=81153, avg=30854.88, stdev=16073.05 00:18:18.554 lat (usec): min=9224, max=81162, avg=31086.52, stdev=16192.65 00:18:18.554 clat percentiles (usec): 00:18:18.554 | 1.00th=[12125], 5.00th=[12780], 10.00th=[12780], 20.00th=[15008], 00:18:18.554 | 30.00th=[18744], 40.00th=[21365], 50.00th=[28181], 60.00th=[34866], 00:18:18.554 | 70.00th=[39060], 80.00th=[42730], 90.00th=[51119], 95.00th=[57934], 00:18:18.554 | 99.00th=[79168], 99.50th=[79168], 99.90th=[81265], 99.95th=[81265], 00:18:18.554 | 99.99th=[81265] 00:18:18.554 bw ( KiB/s): min= 8192, max=12288, per=14.53%, avg=10240.00, stdev=2896.31, samples=2 00:18:18.554 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:18:18.555 lat (msec) : 4=0.27%, 10=0.12%, 20=45.25%, 50=47.97%, 100=6.38% 00:18:18.555 cpu : usr=2.19%, sys=3.49%, ctx=228, majf=0, minf=2 00:18:18.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:18.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:18.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:18.555 issued rwts: total=2249,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:18.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:18.555 00:18:18.555 Run status group 0 (all jobs): 00:18:18.555 READ: bw=62.4MiB/s (65.4MB/s), 8960KiB/s-25.8MiB/s (9175kB/s-27.1MB/s), io=65.3MiB (68.5MB), run=1003-1047msec 00:18:18.555 WRITE: bw=68.8MiB/s (72.2MB/s), 9.96MiB/s-29.4MiB/s (10.4MB/s-30.8MB/s), io=72.1MiB (75.6MB), run=1003-1047msec 00:18:18.555 00:18:18.555 Disk stats (read/write): 00:18:18.555 nvme0n1: ios=6323/7168, merge=0/0, ticks=39759/45992, in_queue=85751, util=81.06% 00:18:18.555 nvme0n2: ios=3826/4096, merge=0/0, ticks=30090/24457, in_queue=54547, util=89.22% 00:18:18.555 nvme0n3: ios=2097/2141, merge=0/0, ticks=38185/56070, in_queue=94255, util=95.18% 00:18:18.555 nvme0n4: ios=2099/2143, merge=0/0, ticks=21203/30173, in_queue=51376, util=96.50% 00:18:18.555 15:23:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:18.555 [global] 00:18:18.555 thread=1 00:18:18.555 invalidate=1 00:18:18.555 rw=randwrite 00:18:18.555 time_based=1 00:18:18.555 runtime=1 00:18:18.555 ioengine=libaio 00:18:18.555 direct=1 00:18:18.555 bs=4096 00:18:18.555 iodepth=128 00:18:18.555 norandommap=0 00:18:18.555 numjobs=1 00:18:18.555 00:18:18.555 verify_dump=1 00:18:18.555 verify_backlog=512 00:18:18.555 verify_state_save=0 00:18:18.555 do_verify=1 00:18:18.555 verify=crc32c-intel 00:18:18.555 [job0] 00:18:18.555 filename=/dev/nvme0n1 00:18:18.555 [job1] 00:18:18.555 filename=/dev/nvme0n2 00:18:18.555 [job2] 
00:18:18.555 filename=/dev/nvme0n3 00:18:18.555 [job3] 00:18:18.555 filename=/dev/nvme0n4 00:18:18.555 Could not set queue depth (nvme0n1) 00:18:18.555 Could not set queue depth (nvme0n2) 00:18:18.555 Could not set queue depth (nvme0n3) 00:18:18.555 Could not set queue depth (nvme0n4) 00:18:18.819 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.819 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.819 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.819 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:18.819 fio-3.35 00:18:18.819 Starting 4 threads 00:18:20.228 00:18:20.228 job0: (groupid=0, jobs=1): err= 0: pid=687817: Mon Jul 15 15:23:29 2024 00:18:20.228 read: IOPS=3577, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:18:20.228 slat (nsec): min=855, max=25700k, avg=159286.70, stdev=1125707.96 00:18:20.228 clat (usec): min=1414, max=92199, avg=20919.57, stdev=16448.24 00:18:20.228 lat (usec): min=5498, max=92206, avg=21078.86, stdev=16531.68 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 20.00th=[11469], 00:18:20.228 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13566], 60.00th=[15926], 00:18:20.228 | 70.00th=[18220], 80.00th=[25297], 90.00th=[50594], 95.00th=[58983], 00:18:20.228 | 99.00th=[81265], 99.50th=[82314], 99.90th=[91751], 99.95th=[91751], 00:18:20.228 | 99.99th=[91751] 00:18:20.228 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:18:20.228 slat (nsec): min=1458, max=15856k, avg=100299.80, stdev=543150.96 00:18:20.228 clat (usec): min=5007, max=40062, avg=12669.65, stdev=5470.16 00:18:20.228 lat (usec): min=5014, max=40069, avg=12769.95, stdev=5496.88 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 5604], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 9503], 00:18:20.228 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11076], 60.00th=[11731], 00:18:20.228 | 70.00th=[13304], 80.00th=[14746], 90.00th=[17171], 95.00th=[20055], 00:18:20.228 | 99.00th=[38011], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:18:20.228 | 99.99th=[40109] 00:18:20.228 bw ( KiB/s): min=15376, max=16384, per=18.93%, avg=15880.00, stdev=712.76, samples=2 00:18:20.228 iops : min= 3844, max= 4096, avg=3970.00, stdev=178.19, samples=2 00:18:20.228 lat (msec) : 2=0.01%, 10=21.74%, 20=62.74%, 50=10.60%, 100=4.91% 00:18:20.228 cpu : usr=2.50%, sys=3.60%, ctx=503, majf=0, minf=1 00:18:20.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:20.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.228 issued rwts: total=3585,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.228 job1: (groupid=0, jobs=1): err= 0: pid=687819: Mon Jul 15 15:23:29 2024 00:18:20.228 read: IOPS=4147, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec) 00:18:20.228 slat (nsec): min=855, max=23655k, avg=106209.64, stdev=727033.21 00:18:20.228 clat (usec): min=2355, max=48572, avg=11762.62, stdev=3835.83 00:18:20.228 lat (usec): min=4638, max=48578, avg=11868.83, stdev=3906.72 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 6783], 5.00th=[ 8356], 10.00th=[ 
9110], 20.00th=[10290], 00:18:20.228 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:18:20.228 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13042], 95.00th=[14877], 00:18:20.228 | 99.00th=[30802], 99.50th=[40109], 99.90th=[48497], 99.95th=[48497], 00:18:20.228 | 99.99th=[48497] 00:18:20.228 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:18:20.228 slat (nsec): min=1455, max=9466.1k, avg=115996.81, stdev=648127.99 00:18:20.228 clat (usec): min=1093, max=106452, avg=16902.88, stdev=18191.39 00:18:20.228 lat (usec): min=1101, max=106461, avg=17018.88, stdev=18293.80 00:18:20.228 clat percentiles (msec): 00:18:20.228 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:18:20.228 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:18:20.228 | 70.00th=[ 14], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 63], 00:18:20.228 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 107], 99.95th=[ 107], 00:18:20.228 | 99.99th=[ 107] 00:18:20.228 bw ( KiB/s): min=14832, max=21528, per=21.67%, avg=18180.00, stdev=4734.79, samples=2 00:18:20.228 iops : min= 3708, max= 5382, avg=4545.00, stdev=1183.70, samples=2 00:18:20.228 lat (msec) : 2=0.10%, 4=0.71%, 10=26.92%, 20=61.72%, 50=7.03% 00:18:20.228 lat (msec) : 100=3.28%, 250=0.24% 00:18:20.228 cpu : usr=3.39%, sys=3.29%, ctx=511, majf=0, minf=1 00:18:20.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:20.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.228 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.228 job2: (groupid=0, jobs=1): err= 0: pid=687828: Mon Jul 15 15:23:29 2024 00:18:20.228 read: IOPS=4724, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:18:20.228 slat (nsec): min=943, max=11576k, avg=92444.72, stdev=594607.68 00:18:20.228 clat (usec): min=1085, max=55897, avg=12099.76, stdev=6768.90 00:18:20.228 lat (usec): min=3307, max=55906, avg=12192.20, stdev=6819.40 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 5342], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8848], 00:18:20.228 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11076], 00:18:20.228 | 70.00th=[11994], 80.00th=[13435], 90.00th=[17695], 95.00th=[19792], 00:18:20.228 | 99.00th=[46924], 99.50th=[53216], 99.90th=[53216], 99.95th=[53740], 00:18:20.228 | 99.99th=[55837] 00:18:20.228 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:18:20.228 slat (nsec): min=1574, max=9502.3k, avg=101658.20, stdev=574986.05 00:18:20.228 clat (usec): min=771, max=51363, avg=13502.31, stdev=8425.93 00:18:20.228 lat (usec): min=845, max=51372, avg=13603.97, stdev=8476.89 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 3687], 5.00th=[ 5997], 10.00th=[ 7439], 20.00th=[ 8586], 00:18:20.228 | 30.00th=[ 9241], 40.00th=[10290], 50.00th=[11338], 60.00th=[11863], 00:18:20.228 | 70.00th=[13173], 80.00th=[15795], 90.00th=[21365], 95.00th=[34341], 00:18:20.228 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:18:20.228 | 99.99th=[51119] 00:18:20.228 bw ( KiB/s): min=16384, max=24568, per=24.41%, avg=20476.00, stdev=5786.96, samples=2 00:18:20.228 iops : min= 4096, max= 6142, avg=5119.00, stdev=1446.74, samples=2 00:18:20.228 lat (usec) : 1000=0.07% 00:18:20.228 lat (msec) : 2=0.04%, 4=0.86%, 10=38.33%, 20=52.46%, 50=7.58% 
00:18:20.228 lat (msec) : 100=0.66% 00:18:20.228 cpu : usr=3.70%, sys=4.80%, ctx=516, majf=0, minf=1 00:18:20.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:20.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.228 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.228 job3: (groupid=0, jobs=1): err= 0: pid=687832: Mon Jul 15 15:23:29 2024 00:18:20.228 read: IOPS=7111, BW=27.8MiB/s (29.1MB/s)(28.0MiB/1008msec) 00:18:20.228 slat (nsec): min=944, max=7735.3k, avg=73427.71, stdev=494111.32 00:18:20.228 clat (usec): min=3576, max=19449, avg=9365.80, stdev=2359.94 00:18:20.228 lat (usec): min=3584, max=19450, avg=9439.23, stdev=2390.40 00:18:20.228 clat percentiles (usec): 00:18:20.228 | 1.00th=[ 4490], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 7373], 00:18:20.228 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9503], 00:18:20.229 | 70.00th=[10159], 80.00th=[11469], 90.00th=[12780], 95.00th=[13435], 00:18:20.229 | 99.00th=[15926], 99.50th=[16909], 99.90th=[19268], 99.95th=[19530], 00:18:20.229 | 99.99th=[19530] 00:18:20.229 write: IOPS=7259, BW=28.4MiB/s (29.7MB/s)(28.6MiB/1008msec); 0 zone resets 00:18:20.229 slat (nsec): min=1606, max=7523.1k, avg=58107.57, stdev=385705.95 00:18:20.229 clat (usec): min=1151, max=22835, avg=8259.26, stdev=2899.31 00:18:20.229 lat (usec): min=1162, max=22837, avg=8317.37, stdev=2917.17 00:18:20.229 clat percentiles (usec): 00:18:20.229 | 1.00th=[ 2835], 5.00th=[ 4080], 10.00th=[ 4686], 20.00th=[ 5866], 00:18:20.229 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8586], 00:18:20.229 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[12649], 95.00th=[13566], 00:18:20.229 | 99.00th=[14746], 99.50th=[16450], 99.90th=[19530], 99.95th=[22938], 00:18:20.229 | 99.99th=[22938] 00:18:20.229 bw ( KiB/s): min=24776, max=32752, per=34.28%, avg=28764.00, stdev=5639.88, samples=2 00:18:20.229 iops : min= 6194, max= 8188, avg=7191.00, stdev=1409.97, samples=2 00:18:20.229 lat (msec) : 2=0.10%, 4=2.35%, 10=69.48%, 20=28.02%, 50=0.04% 00:18:20.229 cpu : usr=5.26%, sys=7.85%, ctx=558, majf=0, minf=1 00:18:20.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:20.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.229 issued rwts: total=7168,7318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.229 00:18:20.229 Run status group 0 (all jobs): 00:18:20.229 READ: bw=76.1MiB/s (79.8MB/s), 14.0MiB/s-27.8MiB/s (14.7MB/s-29.1MB/s), io=76.7MiB (80.5MB), run=1002-1008msec 00:18:20.229 WRITE: bw=81.9MiB/s (85.9MB/s), 16.0MiB/s-28.4MiB/s (16.7MB/s-29.7MB/s), io=82.6MiB (86.6MB), run=1002-1008msec 00:18:20.229 00:18:20.229 Disk stats (read/write): 00:18:20.229 nvme0n1: ios=3090/3072, merge=0/0, ticks=16012/10444, in_queue=26456, util=87.98% 00:18:20.229 nvme0n2: ios=4109/4287, merge=0/0, ticks=27983/31491, in_queue=59474, util=86.25% 00:18:20.229 nvme0n3: ios=4002/4096, merge=0/0, ticks=27197/29199, in_queue=56396, util=95.17% 00:18:20.229 nvme0n4: ios=6196/6228, merge=0/0, ticks=45451/43662, in_queue=89113, util=100.00% 00:18:20.229 15:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 
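The hotplug phase that starts here follows a simple pattern, visible in the trace below: launch a 10-second read job against the four namespaces in the background, remove the backing bdevs out from under it over RPC, and treat a fio failure (the Remote I/O errors that follow) as the expected outcome. A rough sketch of that flow, with paths shortened:

./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10 s read on nvme0n1..n4
fio_pid=$!
sleep 3                                                      # let the job get going
./scripts/rpc.py bdev_raid_delete concat0
./scripts/rpc.py bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  ./scripts/rpc.py bdev_malloc_delete "$bdev"
done
if wait $fio_pid; then
  echo "unexpected: fio completed despite bdev removal"
else
  echo "nvmf hotplug test: fio failed as expected"
fi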
00:18:20.229 15:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=688145 00:18:20.229 15:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:20.229 15:23:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:20.229 [global] 00:18:20.229 thread=1 00:18:20.229 invalidate=1 00:18:20.229 rw=read 00:18:20.229 time_based=1 00:18:20.229 runtime=10 00:18:20.229 ioengine=libaio 00:18:20.229 direct=1 00:18:20.229 bs=4096 00:18:20.229 iodepth=1 00:18:20.229 norandommap=1 00:18:20.229 numjobs=1 00:18:20.229 00:18:20.229 [job0] 00:18:20.229 filename=/dev/nvme0n1 00:18:20.229 [job1] 00:18:20.229 filename=/dev/nvme0n2 00:18:20.229 [job2] 00:18:20.229 filename=/dev/nvme0n3 00:18:20.229 [job3] 00:18:20.229 filename=/dev/nvme0n4 00:18:20.229 Could not set queue depth (nvme0n1) 00:18:20.229 Could not set queue depth (nvme0n2) 00:18:20.229 Could not set queue depth (nvme0n3) 00:18:20.229 Could not set queue depth (nvme0n4) 00:18:20.492 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.492 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.492 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.492 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:20.492 fio-3.35 00:18:20.492 Starting 4 threads 00:18:23.032 15:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:23.292 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=262144, buflen=4096 00:18:23.292 fio: pid=688352, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.292 15:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:23.292 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1728512, buflen=4096 00:18:23.292 fio: pid=688346, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.292 15:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.292 15:23:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:23.552 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=16080896, buflen=4096 00:18:23.552 fio: pid=688338, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.552 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.552 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:23.552 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=15511552, buflen=4096 00:18:23.552 fio: pid=688339, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:23.812 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.812 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:23.812 00:18:23.812 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=688338: Mon Jul 15 15:23:33 2024 00:18:23.812 read: IOPS=1347, BW=5387KiB/s (5517kB/s)(15.3MiB/2915msec) 00:18:23.812 slat (usec): min=6, max=15204, avg=28.26, stdev=301.79 00:18:23.812 clat (usec): min=180, max=963, avg=708.76, stdev=76.31 00:18:23.812 lat (usec): min=188, max=16043, avg=737.03, stdev=313.89 00:18:23.812 clat percentiles (usec): 00:18:23.812 | 1.00th=[ 478], 5.00th=[ 570], 10.00th=[ 611], 20.00th=[ 652], 00:18:23.812 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:18:23.812 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 783], 95.00th=[ 807], 00:18:23.812 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 938], 99.95th=[ 955], 00:18:23.812 | 99.99th=[ 963] 00:18:23.812 bw ( KiB/s): min= 5384, max= 5520, per=51.06%, avg=5435.20, stdev=50.79, samples=5 00:18:23.812 iops : min= 1346, max= 1380, avg=1358.80, stdev=12.70, samples=5 00:18:23.812 lat (usec) : 250=0.03%, 500=1.73%, 750=66.74%, 1000=31.47% 00:18:23.812 cpu : usr=1.17%, sys=3.57%, ctx=3929, majf=0, minf=1 00:18:23.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 issued rwts: total=3927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.812 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=688339: Mon Jul 15 15:23:33 2024 00:18:23.812 read: IOPS=1229, BW=4917KiB/s (5035kB/s)(14.8MiB/3081msec) 00:18:23.812 slat (usec): min=6, max=21480, avg=42.07, stdev=550.66 00:18:23.812 clat (usec): min=190, max=6823, avg=765.83, stdev=167.27 00:18:23.812 lat (usec): min=197, max=22286, avg=807.90, stdev=576.39 00:18:23.812 clat percentiles (usec): 00:18:23.812 | 1.00th=[ 412], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 693], 00:18:23.812 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 799], 00:18:23.812 | 70.00th=[ 816], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 898], 00:18:23.812 | 99.00th=[ 963], 99.50th=[ 1029], 99.90th=[ 1221], 99.95th=[ 6259], 00:18:23.812 | 99.99th=[ 6849] 00:18:23.812 bw ( KiB/s): min= 4952, max= 5048, per=46.88%, avg=4990.40, stdev=35.51, samples=5 00:18:23.812 iops : min= 1238, max= 1262, avg=1247.60, stdev= 8.88, samples=5 00:18:23.812 lat (usec) : 250=0.08%, 500=1.64%, 750=37.72%, 1000=59.85% 00:18:23.812 lat (msec) : 2=0.63%, 10=0.05% 00:18:23.812 cpu : usr=1.46%, sys=2.79%, ctx=3796, majf=0, minf=1 00:18:23.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 issued rwts: total=3788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.812 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=688346: Mon Jul 15 15:23:33 2024 00:18:23.812 read: IOPS=153, BW=611KiB/s (626kB/s)(1688KiB/2761msec) 00:18:23.812 slat (usec): min=6, max=116, avg=24.29, stdev= 7.11 00:18:23.812 clat (usec): min=634, max=43054, avg=6509.82, stdev=13891.23 
00:18:23.812 lat (usec): min=659, max=43081, avg=6534.15, stdev=13892.33 00:18:23.812 clat percentiles (usec): 00:18:23.812 | 1.00th=[ 816], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1029], 00:18:23.812 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1139], 00:18:23.812 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[41681], 95.00th=[42206], 00:18:23.812 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:18:23.812 | 99.99th=[43254] 00:18:23.812 bw ( KiB/s): min= 96, max= 2304, per=6.24%, avg=664.00, stdev=955.46, samples=5 00:18:23.812 iops : min= 24, max= 576, avg=166.00, stdev=238.86, samples=5 00:18:23.812 lat (usec) : 750=0.71%, 1000=12.77% 00:18:23.812 lat (msec) : 2=73.05%, 50=13.24% 00:18:23.812 cpu : usr=0.11%, sys=0.51%, ctx=424, majf=0, minf=1 00:18:23.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 issued rwts: total=423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.812 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=688352: Mon Jul 15 15:23:33 2024 00:18:23.812 read: IOPS=24, BW=97.9KiB/s (100kB/s)(256KiB/2616msec) 00:18:23.812 slat (nsec): min=25413, max=40933, avg=26308.68, stdev=1950.57 00:18:23.812 clat (usec): min=1078, max=43953, avg=40825.26, stdev=7209.05 00:18:23.812 lat (usec): min=1118, max=43984, avg=40851.57, stdev=7207.74 00:18:23.812 clat percentiles (usec): 00:18:23.812 | 1.00th=[ 1074], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:23.812 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:23.812 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:18:23.812 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:18:23.812 | 99.99th=[43779] 00:18:23.812 bw ( KiB/s): min= 96, max= 104, per=0.91%, avg=97.60, stdev= 3.58, samples=5 00:18:23.812 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:18:23.812 lat (msec) : 2=3.08%, 50=95.38% 00:18:23.812 cpu : usr=0.15%, sys=0.00%, ctx=65, majf=0, minf=2 00:18:23.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.812 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.812 00:18:23.812 Run status group 0 (all jobs): 00:18:23.812 READ: bw=10.4MiB/s (10.9MB/s), 97.9KiB/s-5387KiB/s (100kB/s-5517kB/s), io=32.0MiB (33.6MB), run=2616-3081msec 00:18:23.812 00:18:23.812 Disk stats (read/write): 00:18:23.812 nvme0n1: ios=3829/0, merge=0/0, ticks=2654/0, in_queue=2654, util=94.13% 00:18:23.812 nvme0n2: ios=3541/0, merge=0/0, ticks=2672/0, in_queue=2672, util=94.30% 00:18:23.812 nvme0n3: ios=418/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.08% 00:18:23.812 nvme0n4: ios=63/0, merge=0/0, ticks=2572/0, in_queue=2572, util=96.43% 00:18:23.812 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:23.812 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:24.072 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.072 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:24.072 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.072 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:24.332 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:24.332 15:23:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 688145 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:24.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:24.592 nvmf hotplug test: fio failed as expected 00:18:24.592 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:24.852 15:23:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.852 rmmod nvme_tcp 00:18:24.852 rmmod nvme_fabrics 00:18:24.852 rmmod nvme_keyring 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 684652 ']' 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 684652 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 684652 ']' 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 684652 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 684652 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 684652' 00:18:24.852 killing process with pid 684652 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 684652 00:18:24.852 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 684652 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.153 15:23:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.064 15:23:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:27.064 00:18:27.064 real 0m28.677s 00:18:27.064 user 2m27.724s 00:18:27.064 sys 0m9.444s 00:18:27.064 15:23:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.064 15:23:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 ************************************ 00:18:27.064 END TEST nvmf_fio_target 00:18:27.064 ************************************ 00:18:27.064 15:23:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:27.064 15:23:36 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:27.064 15:23:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:27.064 15:23:36 nvmf_tcp -- common/autotest_common.sh@1105 -- 
# xtrace_disable 00:18:27.064 15:23:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 ************************************ 00:18:27.064 START TEST nvmf_bdevio 00:18:27.064 ************************************ 00:18:27.064 15:23:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:27.324 * Looking for test storage... 00:18:27.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.324 15:23:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.325 15:23:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:35.466 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:35.466 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:35.466 Found net devices under 0000:31:00.0: cvl_0_0 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:35.466 
Found net devices under 0000:31:00.1: cvl_0_1 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.466 15:23:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:18:35.466 00:18:35.466 --- 10.0.0.2 ping statistics --- 00:18:35.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.466 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:18:35.466 00:18:35.466 --- 10.0.0.1 ping statistics --- 00:18:35.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.466 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.466 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=693735 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 693735 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 693735 ']' 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:35.467 15:23:44 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.467 [2024-07-15 15:23:44.375578] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:18:35.467 [2024-07-15 15:23:44.375638] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.467 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.467 [2024-07-15 15:23:44.469604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.467 [2024-07-15 15:23:44.561630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.467 [2024-07-15 15:23:44.561684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:35.467 [2024-07-15 15:23:44.561693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.467 [2024-07-15 15:23:44.561699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.467 [2024-07-15 15:23:44.561705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.467 [2024-07-15 15:23:44.561877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:35.467 [2024-07-15 15:23:44.562048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:35.467 [2024-07-15 15:23:44.562356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:35.467 [2024-07-15 15:23:44.562357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.732 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.732 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:35.732 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.732 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.733 [2024-07-15 15:23:45.225302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.733 Malloc0 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
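[Note] For readability: the bdevio target bring-up traced in the preceding rpc_cmd calls (target/bdevio.sh@18-22) reduces to the standalone sequence below. This is a hedged sketch reconstructed from the trace above, not part of the original console output; the listener address 10.0.0.2:4420, the subsystem NQN, and the 64 MiB / 512-byte Malloc sizing are taken from this run, and scripts/rpc.py is shown without the full /var/jenkins/workspace/... prefix.

    # create the NVMe/TCP transport (options exactly as passed by bdevio.sh)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # back the namespace with a 64 MiB, 512-byte-block malloc bdev
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with serial SPDK00000000000001, any host allowed (-a)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # expose the subsystem on the netns-scoped target interface
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is registered, the target prints the "Listening on 10.0.0.2 port 4420" notice that follows, and the bdevio binary attaches to the subsystem through the JSON generated by gen_nvmf_target_json further down.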
00:18:35.733 [2024-07-15 15:23:45.290474] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:35.733 { 00:18:35.733 "params": { 00:18:35.733 "name": "Nvme$subsystem", 00:18:35.733 "trtype": "$TEST_TRANSPORT", 00:18:35.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.733 "adrfam": "ipv4", 00:18:35.733 "trsvcid": "$NVMF_PORT", 00:18:35.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.733 "hdgst": ${hdgst:-false}, 00:18:35.733 "ddgst": ${ddgst:-false} 00:18:35.733 }, 00:18:35.733 "method": "bdev_nvme_attach_controller" 00:18:35.733 } 00:18:35.733 EOF 00:18:35.733 )") 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:35.733 15:23:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:35.733 "params": { 00:18:35.733 "name": "Nvme1", 00:18:35.733 "trtype": "tcp", 00:18:35.733 "traddr": "10.0.0.2", 00:18:35.733 "adrfam": "ipv4", 00:18:35.733 "trsvcid": "4420", 00:18:35.733 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.733 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.733 "hdgst": false, 00:18:35.733 "ddgst": false 00:18:35.733 }, 00:18:35.733 "method": "bdev_nvme_attach_controller" 00:18:35.733 }' 00:18:35.994 [2024-07-15 15:23:45.354696] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:18:35.994 [2024-07-15 15:23:45.354762] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693949 ] 00:18:35.994 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.994 [2024-07-15 15:23:45.426169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.994 [2024-07-15 15:23:45.502069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.994 [2024-07-15 15:23:45.502190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.994 [2024-07-15 15:23:45.502193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.254 I/O targets: 00:18:36.254 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:36.254 00:18:36.254 00:18:36.254 CUnit - A unit testing framework for C - Version 2.1-3 00:18:36.254 http://cunit.sourceforge.net/ 00:18:36.254 00:18:36.254 00:18:36.254 Suite: bdevio tests on: Nvme1n1 00:18:36.254 Test: blockdev write read block ...passed 00:18:36.254 Test: blockdev write zeroes read block ...passed 00:18:36.254 Test: blockdev write zeroes read no split ...passed 00:18:36.515 Test: blockdev write zeroes read split ...passed 00:18:36.515 Test: blockdev write zeroes read split partial ...passed 00:18:36.515 Test: blockdev reset ...[2024-07-15 15:23:45.917815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.515 [2024-07-15 15:23:45.917876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d7c50 (9): Bad file descriptor 00:18:36.515 [2024-07-15 15:23:46.060846] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:36.515 passed 00:18:36.515 Test: blockdev write read 8 blocks ...passed 00:18:36.515 Test: blockdev write read size > 128k ...passed 00:18:36.515 Test: blockdev write read invalid size ...passed 00:18:36.776 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:36.776 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:36.776 Test: blockdev write read max offset ...passed 00:18:36.776 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:36.776 Test: blockdev writev readv 8 blocks ...passed 00:18:36.776 Test: blockdev writev readv 30 x 1block ...passed 00:18:36.776 Test: blockdev writev readv block ...passed 00:18:36.776 Test: blockdev writev readv size > 128k ...passed 00:18:36.776 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:36.776 Test: blockdev comparev and writev ...[2024-07-15 15:23:46.362099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.362134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.362517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.362535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.362866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.362887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.362893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.363264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.363272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:36.776 [2024-07-15 15:23:46.363281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.776 [2024-07-15 15:23:46.363287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:37.037 passed 00:18:37.037 Test: blockdev nvme passthru rw ...passed 00:18:37.037 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:23:46.447389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.037 [2024-07-15 15:23:46.447401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:37.037 [2024-07-15 15:23:46.447639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.037 [2024-07-15 15:23:46.447647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:37.037 [2024-07-15 15:23:46.447870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.037 [2024-07-15 15:23:46.447878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:37.037 [2024-07-15 15:23:46.448109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.037 [2024-07-15 15:23:46.448118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:37.037 passed 00:18:37.037 Test: blockdev nvme admin passthru ...passed 00:18:37.037 Test: blockdev copy ...passed 00:18:37.037 00:18:37.037 Run Summary: Type Total Ran Passed Failed Inactive 00:18:37.037 suites 1 1 n/a 0 0 00:18:37.037 tests 23 23 23 0 0 00:18:37.037 asserts 152 152 152 0 n/a 00:18:37.037 00:18:37.037 Elapsed time = 1.475 seconds 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:37.037 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:37.037 rmmod nvme_tcp 00:18:37.037 rmmod nvme_fabrics 00:18:37.299 rmmod nvme_keyring 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 693735 ']' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
693735 ']' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 693735' 00:18:37.299 killing process with pid 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 693735 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.299 15:23:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.845 15:23:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.845 00:18:39.845 real 0m12.268s 00:18:39.845 user 0m14.052s 00:18:39.845 sys 0m6.075s 00:18:39.845 15:23:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:39.845 15:23:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:39.845 ************************************ 00:18:39.845 END TEST nvmf_bdevio 00:18:39.845 ************************************ 00:18:39.845 15:23:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:39.845 15:23:48 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:39.845 15:23:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:39.845 15:23:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.845 15:23:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.845 ************************************ 00:18:39.845 START TEST nvmf_auth_target 00:18:39.845 ************************************ 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:39.845 * Looking for test storage... 
00:18:39.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.845 15:23:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.846 15:23:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.984 15:23:56 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:47.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:47.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:18:47.984 Found net devices under 0000:31:00.0: cvl_0_0 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:47.984 Found net devices under 0000:31:00.1: cvl_0_1 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.984 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.581 ms 00:18:47.985 00:18:47.985 --- 10.0.0.2 ping statistics --- 00:18:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.985 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:47.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:18:47.985 00:18:47.985 --- 10.0.0.1 ping statistics --- 00:18:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.985 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=698821 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 698821 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 698821 ']' 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
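For reference, the nvmf_tcp_init trace above reduces to the following standalone sketch of the two-port test bed. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, TCP port 4420 and the nvmf_tgt invocation are all taken from the trace; running the sequence by hand outside the harness is an assumption, not something the log itself does:

# Target-side port moves into its own network namespace; the initiator side stays in the root namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Reachability checks in both directions, then the target app is launched inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# Backgrounded here; the harness then waits for the RPC socket (/var/tmp/spdk.sock) to appear
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &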
00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.985 15:23:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=698855 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2d0df65f6c3bcf2da86dd5e8d760721a9ddcda03fe1c487e 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1iM 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2d0df65f6c3bcf2da86dd5e8d760721a9ddcda03fe1c487e 0 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2d0df65f6c3bcf2da86dd5e8d760721a9ddcda03fe1c487e 0 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2d0df65f6c3bcf2da86dd5e8d760721a9ddcda03fe1c487e 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1iM 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1iM 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1iM 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9520feea171ff57078ab4104c87d96743e8dfa1c6b44969f63b6bc46be5bef66 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Xhr 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9520feea171ff57078ab4104c87d96743e8dfa1c6b44969f63b6bc46be5bef66 3 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9520feea171ff57078ab4104c87d96743e8dfa1c6b44969f63b6bc46be5bef66 3 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9520feea171ff57078ab4104c87d96743e8dfa1c6b44969f63b6bc46be5bef66 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Xhr 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Xhr 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.Xhr 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9a0f51ce438b2b97e23e71c74836cf69 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.S5X 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9a0f51ce438b2b97e23e71c74836cf69 1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9a0f51ce438b2b97e23e71c74836cf69 1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=9a0f51ce438b2b97e23e71c74836cf69 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:48.245 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.S5X 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.S5X 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.S5X 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=af8402eac2903592df8beb0459b88d483fb4c279ce0e274a 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fgy 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key af8402eac2903592df8beb0459b88d483fb4c279ce0e274a 2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 af8402eac2903592df8beb0459b88d483fb4c279ce0e274a 2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=af8402eac2903592df8beb0459b88d483fb4c279ce0e274a 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fgy 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fgy 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.fgy 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e38941c024354cd5931d79e041fbbbd9a9b26e0e352b194d 00:18:48.506 
15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Xdf 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e38941c024354cd5931d79e041fbbbd9a9b26e0e352b194d 2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e38941c024354cd5931d79e041fbbbd9a9b26e0e352b194d 2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e38941c024354cd5931d79e041fbbbd9a9b26e0e352b194d 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:48.506 15:23:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Xdf 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Xdf 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Xdf 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=65ef230bc2e6e7eb9d45b04839649b15 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ckV 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 65ef230bc2e6e7eb9d45b04839649b15 1 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 65ef230bc2e6e7eb9d45b04839649b15 1 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=65ef230bc2e6e7eb9d45b04839649b15 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ckV 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ckV 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ckV 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:48.506 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0c09d565e97a00cb679bea600fb770efd0dea9d0c28d4dfeebfae41adfd2ee43 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.w86 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0c09d565e97a00cb679bea600fb770efd0dea9d0c28d4dfeebfae41adfd2ee43 3 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0c09d565e97a00cb679bea600fb770efd0dea9d0c28d4dfeebfae41adfd2ee43 3 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0c09d565e97a00cb679bea600fb770efd0dea9d0c28d4dfeebfae41adfd2ee43 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:48.507 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.w86 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.w86 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.w86 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 698821 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 698821 ']' 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
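The gen_dhchap_key calls above only show "python -" for the final formatting step. Judging from the secrets that appear later in the nvme connect lines (base64 of the 48/64-character hex string plus four trailing bytes, wrapped as DHHC-1:<digest id>:<base64>:), a plausible reconstruction is sketched below. The digest ids (null=0, sha256=1, sha384=2, sha512=3) and the xxd/mktemp/chmod steps are taken from the trace; treating the trailing four bytes as a little-endian CRC-32 of the secret is an assumption, and the python one-liner is hypothetical, not the script's actual code:

# Hypothetical equivalent of gen_dhchap_key null 48 (keys[0] in this run)
key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48-character hex string
file=$(mktemp -t spdk.key-null.XXX)
# Assumption: DHHC-1 secret = base64(secret bytes || CRC-32(secret), little-endian), with a trailing ':'
python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" 0 > "$file"
chmod 0600 "$file"
echo "$file"                              # e.g. /tmp/spdk.key-null.1iM

The key/ckey files produced this way are what the rest of the run registers on both sides: keyring_file_add_key key0/ckey0 (and so on) against the target's default RPC socket, and the same calls against /var/tmp/host.sock for the host.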
00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 698855 /var/tmp/host.sock 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 698855 ']' 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:48.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.767 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1iM 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1iM 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1iM 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.Xhr ]] 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xhr 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xhr 00:18:49.034 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xhr 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.S5X 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.S5X 00:18:49.294 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.S5X 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.fgy ]] 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fgy 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fgy 00:18:49.553 15:23:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fgy 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Xdf 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Xdf 00:18:49.553 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Xdf 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ckV ]] 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ckV 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ckV 00:18:49.812 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.ckV 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.w86 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.w86 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.w86 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.072 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.332 15:23:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.592 00:18:50.592 15:24:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.592 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.592 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.853 { 00:18:50.853 "cntlid": 1, 00:18:50.853 "qid": 0, 00:18:50.853 "state": "enabled", 00:18:50.853 "thread": "nvmf_tgt_poll_group_000", 00:18:50.853 "listen_address": { 00:18:50.853 "trtype": "TCP", 00:18:50.853 "adrfam": "IPv4", 00:18:50.853 "traddr": "10.0.0.2", 00:18:50.853 "trsvcid": "4420" 00:18:50.853 }, 00:18:50.853 "peer_address": { 00:18:50.853 "trtype": "TCP", 00:18:50.853 "adrfam": "IPv4", 00:18:50.853 "traddr": "10.0.0.1", 00:18:50.853 "trsvcid": "50110" 00:18:50.853 }, 00:18:50.853 "auth": { 00:18:50.853 "state": "completed", 00:18:50.853 "digest": "sha256", 00:18:50.853 "dhgroup": "null" 00:18:50.853 } 00:18:50.853 } 00:18:50.853 ]' 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.853 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.113 15:24:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.683 15:24:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.683 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.943 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.203 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.203 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.203 { 00:18:52.203 "cntlid": 3, 00:18:52.203 "qid": 0, 00:18:52.203 
"state": "enabled", 00:18:52.203 "thread": "nvmf_tgt_poll_group_000", 00:18:52.203 "listen_address": { 00:18:52.203 "trtype": "TCP", 00:18:52.203 "adrfam": "IPv4", 00:18:52.203 "traddr": "10.0.0.2", 00:18:52.203 "trsvcid": "4420" 00:18:52.203 }, 00:18:52.203 "peer_address": { 00:18:52.203 "trtype": "TCP", 00:18:52.203 "adrfam": "IPv4", 00:18:52.203 "traddr": "10.0.0.1", 00:18:52.203 "trsvcid": "50142" 00:18:52.203 }, 00:18:52.203 "auth": { 00:18:52.203 "state": "completed", 00:18:52.203 "digest": "sha256", 00:18:52.203 "dhgroup": "null" 00:18:52.203 } 00:18:52.203 } 00:18:52.203 ]' 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.463 15:24:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.723 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.292 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.552 15:24:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.552 15:24:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.552 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.552 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.552 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.815 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.815 { 00:18:53.815 "cntlid": 5, 00:18:53.815 "qid": 0, 00:18:53.815 "state": "enabled", 00:18:53.815 "thread": "nvmf_tgt_poll_group_000", 00:18:53.815 "listen_address": { 00:18:53.815 "trtype": "TCP", 00:18:53.815 "adrfam": "IPv4", 00:18:53.815 "traddr": "10.0.0.2", 00:18:53.815 "trsvcid": "4420" 00:18:53.815 }, 00:18:53.815 "peer_address": { 00:18:53.815 "trtype": "TCP", 00:18:53.815 "adrfam": "IPv4", 00:18:53.815 "traddr": "10.0.0.1", 00:18:53.815 "trsvcid": "50178" 00:18:53.815 }, 00:18:53.815 "auth": { 00:18:53.815 "state": "completed", 00:18:53.815 "digest": "sha256", 00:18:53.815 "dhgroup": "null" 00:18:53.815 } 00:18:53.815 } 00:18:53.815 ]' 00:18:53.815 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.074 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.333 15:24:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:54.903 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.163 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.423 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.423 15:24:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.423 15:24:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.423 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.423 { 00:18:55.423 "cntlid": 7, 00:18:55.423 "qid": 0, 00:18:55.423 "state": "enabled", 00:18:55.423 "thread": "nvmf_tgt_poll_group_000", 00:18:55.423 "listen_address": { 00:18:55.423 "trtype": "TCP", 00:18:55.423 "adrfam": "IPv4", 00:18:55.423 "traddr": "10.0.0.2", 00:18:55.423 "trsvcid": "4420" 00:18:55.423 }, 00:18:55.423 "peer_address": { 00:18:55.423 "trtype": "TCP", 00:18:55.423 "adrfam": "IPv4", 00:18:55.423 "traddr": "10.0.0.1", 00:18:55.423 "trsvcid": "50204" 00:18:55.423 }, 00:18:55.423 "auth": { 00:18:55.423 "state": "completed", 00:18:55.423 "digest": "sha256", 00:18:55.423 "dhgroup": "null" 00:18:55.423 } 00:18:55.423 } 00:18:55.423 ]' 00:18:55.423 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.684 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:18:56.626 15:24:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.626 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.887 00:18:56.887 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.887 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.887 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.147 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.147 { 00:18:57.147 "cntlid": 9, 00:18:57.147 "qid": 0, 00:18:57.147 "state": "enabled", 00:18:57.147 "thread": "nvmf_tgt_poll_group_000", 00:18:57.147 "listen_address": { 00:18:57.147 "trtype": "TCP", 00:18:57.147 "adrfam": "IPv4", 00:18:57.147 "traddr": "10.0.0.2", 00:18:57.147 "trsvcid": "4420" 00:18:57.147 }, 00:18:57.147 "peer_address": { 00:18:57.147 "trtype": "TCP", 00:18:57.147 "adrfam": "IPv4", 00:18:57.147 "traddr": "10.0.0.1", 00:18:57.147 "trsvcid": "37424" 00:18:57.148 }, 00:18:57.148 "auth": { 00:18:57.148 "state": "completed", 00:18:57.148 "digest": "sha256", 00:18:57.148 "dhgroup": "ffdhe2048" 00:18:57.148 } 00:18:57.148 } 00:18:57.148 ]' 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.148 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.407 15:24:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.346 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.347 15:24:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.607 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.607 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.607 { 00:18:58.607 "cntlid": 11, 00:18:58.607 "qid": 0, 00:18:58.607 "state": "enabled", 00:18:58.607 "thread": "nvmf_tgt_poll_group_000", 00:18:58.607 "listen_address": { 00:18:58.607 "trtype": "TCP", 00:18:58.607 "adrfam": "IPv4", 00:18:58.607 "traddr": "10.0.0.2", 00:18:58.607 "trsvcid": "4420" 00:18:58.607 }, 00:18:58.607 "peer_address": { 00:18:58.607 "trtype": "TCP", 00:18:58.607 "adrfam": "IPv4", 00:18:58.607 "traddr": "10.0.0.1", 00:18:58.607 "trsvcid": "37444" 00:18:58.607 }, 00:18:58.607 "auth": { 00:18:58.607 "state": "completed", 00:18:58.607 "digest": "sha256", 00:18:58.607 "dhgroup": "ffdhe2048" 00:18:58.607 } 00:18:58.607 } 00:18:58.607 ]' 00:18:58.607 
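Each connect_authenticate pass above ends by dumping the subsystem's queue pairs and asserting the negotiated authentication parameters with jq, as the checks that follow show. A minimal sketch of that verification step, assuming the same subsystem NQN and an rpc.py invocation equivalent to the rpc_cmd helper used throughout this log:

    # Dump the qpairs of the subsystem under test (target-side RPC).
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # Assert the digest, DH group and auth state configured for this iteration.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
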
15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.868 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.128 15:24:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.700 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.962 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.223 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.223 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.223 { 00:19:00.223 "cntlid": 13, 00:19:00.223 "qid": 0, 00:19:00.223 "state": "enabled", 00:19:00.223 "thread": "nvmf_tgt_poll_group_000", 00:19:00.223 "listen_address": { 00:19:00.223 "trtype": "TCP", 00:19:00.223 "adrfam": "IPv4", 00:19:00.223 "traddr": "10.0.0.2", 00:19:00.223 "trsvcid": "4420" 00:19:00.223 }, 00:19:00.223 "peer_address": { 00:19:00.223 "trtype": "TCP", 00:19:00.223 "adrfam": "IPv4", 00:19:00.223 "traddr": "10.0.0.1", 00:19:00.223 "trsvcid": "37480" 00:19:00.223 }, 00:19:00.223 "auth": { 00:19:00.223 "state": "completed", 00:19:00.223 "digest": "sha256", 00:19:00.223 "dhgroup": "ffdhe2048" 00:19:00.223 } 00:19:00.223 } 00:19:00.223 ]' 00:19:00.224 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.485 15:24:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.746 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.318 15:24:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.580 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.841 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.841 { 00:19:01.841 "cntlid": 15, 00:19:01.841 "qid": 0, 00:19:01.841 "state": "enabled", 00:19:01.841 "thread": "nvmf_tgt_poll_group_000", 00:19:01.841 "listen_address": { 00:19:01.841 "trtype": "TCP", 00:19:01.841 "adrfam": "IPv4", 00:19:01.841 "traddr": "10.0.0.2", 00:19:01.841 "trsvcid": "4420" 00:19:01.841 }, 00:19:01.841 "peer_address": { 00:19:01.841 "trtype": "TCP", 00:19:01.841 "adrfam": "IPv4", 00:19:01.841 "traddr": "10.0.0.1", 00:19:01.841 "trsvcid": "37508" 00:19:01.841 }, 00:19:01.841 "auth": { 00:19:01.841 "state": "completed", 00:19:01.841 "digest": "sha256", 00:19:01.841 "dhgroup": "ffdhe2048" 00:19:01.841 } 00:19:01.841 } 00:19:01.841 ]' 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.841 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.102 15:24:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.042 15:24:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.042 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.043 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.302 00:19:03.302 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.302 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.302 15:24:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.562 { 00:19:03.562 "cntlid": 17, 00:19:03.562 "qid": 0, 00:19:03.562 "state": "enabled", 00:19:03.562 "thread": "nvmf_tgt_poll_group_000", 00:19:03.562 "listen_address": { 00:19:03.562 "trtype": "TCP", 00:19:03.562 "adrfam": "IPv4", 
00:19:03.562 "traddr": "10.0.0.2", 00:19:03.562 "trsvcid": "4420" 00:19:03.562 }, 00:19:03.562 "peer_address": { 00:19:03.562 "trtype": "TCP", 00:19:03.562 "adrfam": "IPv4", 00:19:03.562 "traddr": "10.0.0.1", 00:19:03.562 "trsvcid": "37534" 00:19:03.562 }, 00:19:03.562 "auth": { 00:19:03.562 "state": "completed", 00:19:03.562 "digest": "sha256", 00:19:03.562 "dhgroup": "ffdhe3072" 00:19:03.562 } 00:19:03.562 } 00:19:03.562 ]' 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.562 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.822 15:24:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.764 15:24:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.764 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.024 00:19:05.024 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.024 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.024 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.286 { 00:19:05.286 "cntlid": 19, 00:19:05.286 "qid": 0, 00:19:05.286 "state": "enabled", 00:19:05.286 "thread": "nvmf_tgt_poll_group_000", 00:19:05.286 "listen_address": { 00:19:05.286 "trtype": "TCP", 00:19:05.286 "adrfam": "IPv4", 00:19:05.286 "traddr": "10.0.0.2", 00:19:05.286 "trsvcid": "4420" 00:19:05.286 }, 00:19:05.286 "peer_address": { 00:19:05.286 "trtype": "TCP", 00:19:05.286 "adrfam": "IPv4", 00:19:05.286 "traddr": "10.0.0.1", 00:19:05.286 "trsvcid": "37566" 00:19:05.286 }, 00:19:05.286 "auth": { 00:19:05.286 "state": "completed", 00:19:05.286 "digest": "sha256", 00:19:05.286 "dhgroup": "ffdhe3072" 00:19:05.286 } 00:19:05.286 } 00:19:05.286 ]' 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.286 15:24:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.286 15:24:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.547 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:06.119 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.380 15:24:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.641 00:19:06.641 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.641 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.641 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.901 { 00:19:06.901 "cntlid": 21, 00:19:06.901 "qid": 0, 00:19:06.901 "state": "enabled", 00:19:06.901 "thread": "nvmf_tgt_poll_group_000", 00:19:06.901 "listen_address": { 00:19:06.901 "trtype": "TCP", 00:19:06.901 "adrfam": "IPv4", 00:19:06.901 "traddr": "10.0.0.2", 00:19:06.901 "trsvcid": "4420" 00:19:06.901 }, 00:19:06.901 "peer_address": { 00:19:06.901 "trtype": "TCP", 00:19:06.901 "adrfam": "IPv4", 00:19:06.901 "traddr": "10.0.0.1", 00:19:06.901 "trsvcid": "43550" 00:19:06.901 }, 00:19:06.901 "auth": { 00:19:06.901 "state": "completed", 00:19:06.901 "digest": "sha256", 00:19:06.901 "dhgroup": "ffdhe3072" 00:19:06.901 } 00:19:06.901 } 00:19:06.901 ]' 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.901 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.161 15:24:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
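After the SPDK host-side attach/detach succeeds, each pass also cross-checks the target against the Linux kernel initiator: nvme-cli connects with the same DHHC-1 secrets, the connection is torn down, and the host entry is removed so the next key/dhgroup combination starts from a clean subsystem. Roughly, with the secrets abbreviated to placeholder variables (the full values appear verbatim in the surrounding log):

    # Kernel-initiator handshake using the same DH-HMAC-CHAP secrets
    # ($host_secret and $ctrl_secret are placeholders, not values from this run).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Clean up the host entry on the target before the next iteration.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
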
00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.734 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.995 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.256 00:19:08.256 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.256 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.256 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.517 { 00:19:08.517 "cntlid": 23, 00:19:08.517 "qid": 0, 00:19:08.517 "state": "enabled", 00:19:08.517 "thread": "nvmf_tgt_poll_group_000", 00:19:08.517 "listen_address": { 00:19:08.517 "trtype": "TCP", 00:19:08.517 "adrfam": "IPv4", 00:19:08.517 "traddr": "10.0.0.2", 00:19:08.517 "trsvcid": "4420" 00:19:08.517 }, 00:19:08.517 "peer_address": { 00:19:08.517 "trtype": "TCP", 00:19:08.517 "adrfam": "IPv4", 00:19:08.517 "traddr": "10.0.0.1", 00:19:08.517 "trsvcid": "43572" 00:19:08.517 }, 00:19:08.517 "auth": { 00:19:08.517 "state": "completed", 00:19:08.517 "digest": "sha256", 00:19:08.517 "dhgroup": "ffdhe3072" 00:19:08.517 } 00:19:08.517 } 00:19:08.517 ]' 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.517 15:24:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.517 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.517 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.517 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.517 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.517 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.778 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.350 15:24:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.611 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:09.872 00:19:09.872 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.872 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.872 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.132 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.132 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.133 { 00:19:10.133 "cntlid": 25, 00:19:10.133 "qid": 0, 00:19:10.133 "state": "enabled", 00:19:10.133 "thread": "nvmf_tgt_poll_group_000", 00:19:10.133 "listen_address": { 00:19:10.133 "trtype": "TCP", 00:19:10.133 "adrfam": "IPv4", 00:19:10.133 "traddr": "10.0.0.2", 00:19:10.133 "trsvcid": "4420" 00:19:10.133 }, 00:19:10.133 "peer_address": { 00:19:10.133 "trtype": "TCP", 00:19:10.133 "adrfam": "IPv4", 00:19:10.133 "traddr": "10.0.0.1", 00:19:10.133 "trsvcid": "43588" 00:19:10.133 }, 00:19:10.133 "auth": { 00:19:10.133 "state": "completed", 00:19:10.133 "digest": "sha256", 00:19:10.133 "dhgroup": "ffdhe4096" 00:19:10.133 } 00:19:10.133 } 00:19:10.133 ]' 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.133 15:24:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.133 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.394 15:24:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:10.993 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.254 15:24:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.254 15:24:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:11.515 00:19:11.515 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.515 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.515 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.775 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.775 { 00:19:11.775 "cntlid": 27, 00:19:11.775 "qid": 0, 00:19:11.775 "state": "enabled", 00:19:11.775 "thread": "nvmf_tgt_poll_group_000", 00:19:11.775 "listen_address": { 00:19:11.775 "trtype": "TCP", 00:19:11.775 "adrfam": "IPv4", 00:19:11.775 "traddr": "10.0.0.2", 00:19:11.775 "trsvcid": "4420" 00:19:11.775 }, 00:19:11.775 "peer_address": { 00:19:11.775 "trtype": "TCP", 00:19:11.775 "adrfam": "IPv4", 00:19:11.775 "traddr": "10.0.0.1", 00:19:11.775 "trsvcid": "43612" 00:19:11.775 }, 00:19:11.775 "auth": { 00:19:11.775 "state": "completed", 00:19:11.775 "digest": "sha256", 00:19:11.775 "dhgroup": "ffdhe4096" 00:19:11.775 } 00:19:11.775 } 00:19:11.775 ]' 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.776 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.036 15:24:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.606 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.866 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:13.127 00:19:13.127 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.127 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.127 15:24:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.388 { 00:19:13.388 "cntlid": 29, 00:19:13.388 "qid": 0, 00:19:13.388 "state": "enabled", 00:19:13.388 "thread": "nvmf_tgt_poll_group_000", 00:19:13.388 "listen_address": { 00:19:13.388 "trtype": "TCP", 00:19:13.388 "adrfam": "IPv4", 00:19:13.388 "traddr": "10.0.0.2", 00:19:13.388 "trsvcid": "4420" 00:19:13.388 }, 00:19:13.388 "peer_address": { 00:19:13.388 "trtype": "TCP", 00:19:13.388 "adrfam": "IPv4", 00:19:13.388 "traddr": "10.0.0.1", 00:19:13.388 "trsvcid": "43648" 00:19:13.388 }, 00:19:13.388 "auth": { 00:19:13.388 "state": "completed", 00:19:13.388 "digest": "sha256", 00:19:13.388 "dhgroup": "ffdhe4096" 00:19:13.388 } 00:19:13.388 } 00:19:13.388 ]' 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.388 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.389 15:24:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.650 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.221 15:24:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.221 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.481 15:24:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.741 00:19:14.741 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.741 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.741 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.002 { 00:19:15.002 "cntlid": 31, 00:19:15.002 "qid": 0, 00:19:15.002 "state": "enabled", 00:19:15.002 "thread": "nvmf_tgt_poll_group_000", 00:19:15.002 "listen_address": { 00:19:15.002 "trtype": "TCP", 00:19:15.002 "adrfam": "IPv4", 00:19:15.002 "traddr": "10.0.0.2", 00:19:15.002 "trsvcid": "4420" 00:19:15.002 }, 
00:19:15.002 "peer_address": { 00:19:15.002 "trtype": "TCP", 00:19:15.002 "adrfam": "IPv4", 00:19:15.002 "traddr": "10.0.0.1", 00:19:15.002 "trsvcid": "43684" 00:19:15.002 }, 00:19:15.002 "auth": { 00:19:15.002 "state": "completed", 00:19:15.002 "digest": "sha256", 00:19:15.002 "dhgroup": "ffdhe4096" 00:19:15.002 } 00:19:15.002 } 00:19:15.002 ]' 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.002 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.263 15:24:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:15.835 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.095 15:24:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.668 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.668 { 00:19:16.668 "cntlid": 33, 00:19:16.668 "qid": 0, 00:19:16.668 "state": "enabled", 00:19:16.668 "thread": "nvmf_tgt_poll_group_000", 00:19:16.668 "listen_address": { 00:19:16.668 "trtype": "TCP", 00:19:16.668 "adrfam": "IPv4", 00:19:16.668 "traddr": "10.0.0.2", 00:19:16.668 "trsvcid": "4420" 00:19:16.668 }, 00:19:16.668 "peer_address": { 00:19:16.668 "trtype": "TCP", 00:19:16.668 "adrfam": "IPv4", 00:19:16.668 "traddr": "10.0.0.1", 00:19:16.668 "trsvcid": "48388" 00:19:16.668 }, 00:19:16.668 "auth": { 00:19:16.668 "state": "completed", 00:19:16.668 "digest": "sha256", 00:19:16.668 "dhgroup": "ffdhe6144" 00:19:16.668 } 00:19:16.668 } 00:19:16.668 ]' 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.668 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.929 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.929 15:24:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.929 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.929 15:24:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.871 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:18.131 00:19:18.131 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.131 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.131 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.392 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.393 { 00:19:18.393 "cntlid": 35, 00:19:18.393 "qid": 0, 00:19:18.393 "state": "enabled", 00:19:18.393 "thread": "nvmf_tgt_poll_group_000", 00:19:18.393 "listen_address": { 00:19:18.393 "trtype": "TCP", 00:19:18.393 "adrfam": "IPv4", 00:19:18.393 "traddr": "10.0.0.2", 00:19:18.393 "trsvcid": "4420" 00:19:18.393 }, 00:19:18.393 "peer_address": { 00:19:18.393 "trtype": "TCP", 00:19:18.393 "adrfam": "IPv4", 00:19:18.393 "traddr": "10.0.0.1", 00:19:18.393 "trsvcid": "48400" 00:19:18.393 }, 00:19:18.393 "auth": { 00:19:18.393 "state": "completed", 00:19:18.393 "digest": "sha256", 00:19:18.393 "dhgroup": "ffdhe6144" 00:19:18.393 } 00:19:18.393 } 00:19:18.393 ]' 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:18.393 15:24:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.653 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.653 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.653 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.653 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.593 15:24:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.593 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.165 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.165 { 00:19:20.165 "cntlid": 37, 00:19:20.165 "qid": 0, 00:19:20.165 "state": "enabled", 00:19:20.165 "thread": "nvmf_tgt_poll_group_000", 00:19:20.165 "listen_address": { 00:19:20.165 "trtype": "TCP", 00:19:20.165 "adrfam": "IPv4", 00:19:20.165 "traddr": "10.0.0.2", 00:19:20.165 "trsvcid": "4420" 00:19:20.165 }, 00:19:20.165 "peer_address": { 00:19:20.165 "trtype": "TCP", 00:19:20.165 "adrfam": "IPv4", 00:19:20.165 "traddr": "10.0.0.1", 00:19:20.165 "trsvcid": "48428" 00:19:20.165 }, 00:19:20.165 "auth": { 00:19:20.165 "state": "completed", 00:19:20.165 "digest": "sha256", 00:19:20.165 "dhgroup": "ffdhe6144" 00:19:20.165 } 00:19:20.165 } 00:19:20.165 ]' 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.165 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.425 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.426 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.426 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.426 15:24:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:21.368 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.369 15:24:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.629 00:19:21.629 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.629 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.629 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.890 { 00:19:21.890 "cntlid": 39, 00:19:21.890 "qid": 0, 00:19:21.890 "state": "enabled", 00:19:21.890 "thread": "nvmf_tgt_poll_group_000", 00:19:21.890 "listen_address": { 00:19:21.890 "trtype": "TCP", 00:19:21.890 "adrfam": "IPv4", 00:19:21.890 "traddr": "10.0.0.2", 00:19:21.890 "trsvcid": "4420" 00:19:21.890 }, 00:19:21.890 "peer_address": { 00:19:21.890 "trtype": "TCP", 00:19:21.890 "adrfam": "IPv4", 00:19:21.890 "traddr": "10.0.0.1", 00:19:21.890 "trsvcid": "48474" 00:19:21.890 }, 00:19:21.890 "auth": { 00:19:21.890 "state": "completed", 00:19:21.890 "digest": "sha256", 00:19:21.890 "dhgroup": "ffdhe6144" 00:19:21.890 } 00:19:21.890 } 00:19:21.890 ]' 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.890 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.890 15:24:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.149 15:24:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.090 15:24:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.090 15:24:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.660 00:19:23.660 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.660 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.660 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.919 { 00:19:23.919 "cntlid": 41, 00:19:23.919 "qid": 0, 00:19:23.919 "state": "enabled", 00:19:23.919 "thread": "nvmf_tgt_poll_group_000", 00:19:23.919 "listen_address": { 00:19:23.919 "trtype": "TCP", 00:19:23.919 "adrfam": "IPv4", 00:19:23.919 "traddr": "10.0.0.2", 00:19:23.919 "trsvcid": "4420" 00:19:23.919 }, 00:19:23.919 "peer_address": { 00:19:23.919 "trtype": "TCP", 00:19:23.919 "adrfam": "IPv4", 00:19:23.919 "traddr": "10.0.0.1", 00:19:23.919 "trsvcid": "48492" 00:19:23.919 }, 00:19:23.919 "auth": { 00:19:23.919 "state": "completed", 00:19:23.919 "digest": "sha256", 00:19:23.919 "dhgroup": "ffdhe8192" 00:19:23.919 } 00:19:23.919 } 00:19:23.919 ]' 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.919 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.179 15:24:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret 
DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.119 15:24:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.688 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.688 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.948 { 00:19:25.948 "cntlid": 43, 00:19:25.948 "qid": 0, 00:19:25.948 "state": "enabled", 00:19:25.948 "thread": "nvmf_tgt_poll_group_000", 00:19:25.948 "listen_address": { 00:19:25.948 "trtype": "TCP", 00:19:25.948 "adrfam": "IPv4", 00:19:25.948 "traddr": "10.0.0.2", 00:19:25.948 "trsvcid": "4420" 00:19:25.948 }, 00:19:25.948 "peer_address": { 00:19:25.948 "trtype": "TCP", 00:19:25.948 "adrfam": "IPv4", 00:19:25.948 "traddr": "10.0.0.1", 00:19:25.948 "trsvcid": "48526" 00:19:25.948 }, 00:19:25.948 "auth": { 00:19:25.948 "state": "completed", 00:19:25.948 "digest": "sha256", 00:19:25.948 "dhgroup": "ffdhe8192" 00:19:25.948 } 00:19:25.948 } 00:19:25.948 ]' 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.948 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.207 15:24:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:26.777 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.036 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:27.036 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.036 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.036 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.037 15:24:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.606 00:19:27.606 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.606 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.606 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.865 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.865 { 00:19:27.865 "cntlid": 45, 00:19:27.865 "qid": 0, 00:19:27.865 "state": "enabled", 00:19:27.865 "thread": "nvmf_tgt_poll_group_000", 00:19:27.865 "listen_address": { 00:19:27.865 "trtype": "TCP", 00:19:27.865 "adrfam": "IPv4", 00:19:27.865 "traddr": "10.0.0.2", 00:19:27.865 "trsvcid": "4420" 
00:19:27.865 }, 00:19:27.865 "peer_address": { 00:19:27.865 "trtype": "TCP", 00:19:27.865 "adrfam": "IPv4", 00:19:27.865 "traddr": "10.0.0.1", 00:19:27.865 "trsvcid": "39342" 00:19:27.865 }, 00:19:27.865 "auth": { 00:19:27.865 "state": "completed", 00:19:27.865 "digest": "sha256", 00:19:27.865 "dhgroup": "ffdhe8192" 00:19:27.865 } 00:19:27.865 } 00:19:27.866 ]' 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.866 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.125 15:24:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.710 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.996 15:24:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.996 15:24:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.565 00:19:29.565 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.565 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.565 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.825 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.826 { 00:19:29.826 "cntlid": 47, 00:19:29.826 "qid": 0, 00:19:29.826 "state": "enabled", 00:19:29.826 "thread": "nvmf_tgt_poll_group_000", 00:19:29.826 "listen_address": { 00:19:29.826 "trtype": "TCP", 00:19:29.826 "adrfam": "IPv4", 00:19:29.826 "traddr": "10.0.0.2", 00:19:29.826 "trsvcid": "4420" 00:19:29.826 }, 00:19:29.826 "peer_address": { 00:19:29.826 "trtype": "TCP", 00:19:29.826 "adrfam": "IPv4", 00:19:29.826 "traddr": "10.0.0.1", 00:19:29.826 "trsvcid": "39376" 00:19:29.826 }, 00:19:29.826 "auth": { 00:19:29.826 "state": "completed", 00:19:29.826 "digest": "sha256", 00:19:29.826 "dhgroup": "ffdhe8192" 00:19:29.826 } 00:19:29.826 } 00:19:29.826 ]' 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.826 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.826 
15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.085 15:24:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.655 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.919 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.180 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.180 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.180 { 00:19:31.180 "cntlid": 49, 00:19:31.180 "qid": 0, 00:19:31.180 "state": "enabled", 00:19:31.180 "thread": "nvmf_tgt_poll_group_000", 00:19:31.180 "listen_address": { 00:19:31.180 "trtype": "TCP", 00:19:31.180 "adrfam": "IPv4", 00:19:31.180 "traddr": "10.0.0.2", 00:19:31.180 "trsvcid": "4420" 00:19:31.180 }, 00:19:31.180 "peer_address": { 00:19:31.180 "trtype": "TCP", 00:19:31.180 "adrfam": "IPv4", 00:19:31.180 "traddr": "10.0.0.1", 00:19:31.180 "trsvcid": "39394" 00:19:31.180 }, 00:19:31.180 "auth": { 00:19:31.180 "state": "completed", 00:19:31.180 "digest": "sha384", 00:19:31.180 "dhgroup": "null" 00:19:31.180 } 00:19:31.180 } 00:19:31.180 ]' 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.439 15:24:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.699 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.269 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.270 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.270 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.530 15:24:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.789 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.789 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.789 { 00:19:32.789 "cntlid": 51, 00:19:32.789 "qid": 0, 00:19:32.789 "state": "enabled", 00:19:32.789 "thread": "nvmf_tgt_poll_group_000", 00:19:32.789 "listen_address": { 00:19:32.789 "trtype": "TCP", 00:19:32.789 "adrfam": "IPv4", 00:19:32.789 "traddr": "10.0.0.2", 00:19:32.790 "trsvcid": "4420" 00:19:32.790 }, 00:19:32.790 "peer_address": { 00:19:32.790 "trtype": "TCP", 00:19:32.790 "adrfam": "IPv4", 00:19:32.790 "traddr": "10.0.0.1", 00:19:32.790 "trsvcid": "39422" 00:19:32.790 }, 00:19:32.790 "auth": { 00:19:32.790 "state": "completed", 00:19:32.790 "digest": "sha384", 00:19:32.790 "dhgroup": "null" 00:19:32.790 } 00:19:32.790 } 00:19:32.790 ]' 00:19:32.790 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.049 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.309 15:24:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:33.877 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.878 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:34.137 15:24:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.137 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.137 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.398 { 00:19:34.398 "cntlid": 53, 00:19:34.398 "qid": 0, 00:19:34.398 "state": "enabled", 00:19:34.398 "thread": "nvmf_tgt_poll_group_000", 00:19:34.398 "listen_address": { 00:19:34.398 "trtype": "TCP", 00:19:34.398 "adrfam": "IPv4", 00:19:34.398 "traddr": "10.0.0.2", 00:19:34.398 "trsvcid": "4420" 00:19:34.398 }, 00:19:34.398 "peer_address": { 00:19:34.398 "trtype": "TCP", 00:19:34.398 "adrfam": "IPv4", 00:19:34.398 "traddr": "10.0.0.1", 00:19:34.398 "trsvcid": "39462" 00:19:34.398 }, 00:19:34.398 "auth": { 00:19:34.398 "state": "completed", 00:19:34.398 "digest": "sha384", 00:19:34.398 "dhgroup": "null" 00:19:34.398 } 00:19:34.398 } 00:19:34.398 ]' 00:19:34.398 15:24:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.398 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:34.398 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.658 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:34.659 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.659 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.659 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.659 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.659 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.599 15:24:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.599 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.859 00:19:35.859 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.859 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.859 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.119 { 00:19:36.119 "cntlid": 55, 00:19:36.119 "qid": 0, 00:19:36.119 "state": "enabled", 00:19:36.119 "thread": "nvmf_tgt_poll_group_000", 00:19:36.119 "listen_address": { 00:19:36.119 "trtype": "TCP", 00:19:36.119 "adrfam": "IPv4", 00:19:36.119 "traddr": "10.0.0.2", 00:19:36.119 "trsvcid": "4420" 00:19:36.119 }, 00:19:36.119 "peer_address": { 00:19:36.119 "trtype": "TCP", 00:19:36.119 "adrfam": "IPv4", 00:19:36.119 "traddr": "10.0.0.1", 00:19:36.119 "trsvcid": "50756" 00:19:36.119 }, 00:19:36.119 "auth": { 00:19:36.119 "state": "completed", 00:19:36.119 "digest": "sha384", 00:19:36.119 "dhgroup": "null" 00:19:36.119 } 00:19:36.119 } 00:19:36.119 ]' 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.119 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.379 15:24:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:36.949 15:24:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:36.949 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.210 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.469 00:19:37.469 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.469 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.469 15:24:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.727 { 00:19:37.727 "cntlid": 57, 00:19:37.727 "qid": 0, 00:19:37.727 "state": "enabled", 00:19:37.727 "thread": "nvmf_tgt_poll_group_000", 00:19:37.727 "listen_address": { 00:19:37.727 "trtype": "TCP", 00:19:37.727 "adrfam": "IPv4", 00:19:37.727 "traddr": "10.0.0.2", 00:19:37.727 "trsvcid": "4420" 00:19:37.727 }, 00:19:37.727 "peer_address": { 00:19:37.727 "trtype": "TCP", 00:19:37.727 "adrfam": "IPv4", 00:19:37.727 "traddr": "10.0.0.1", 00:19:37.727 "trsvcid": "50792" 00:19:37.727 }, 00:19:37.727 "auth": { 00:19:37.727 "state": "completed", 00:19:37.727 "digest": "sha384", 00:19:37.727 "dhgroup": "ffdhe2048" 00:19:37.727 } 00:19:37.727 } 00:19:37.727 ]' 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.727 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.728 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.987 15:24:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.557 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.817 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.818 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.818 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.078 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.078 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.338 { 00:19:39.338 "cntlid": 59, 00:19:39.338 "qid": 0, 00:19:39.338 "state": "enabled", 00:19:39.338 "thread": "nvmf_tgt_poll_group_000", 00:19:39.338 "listen_address": { 00:19:39.338 "trtype": "TCP", 00:19:39.338 "adrfam": "IPv4", 00:19:39.338 "traddr": "10.0.0.2", 00:19:39.338 "trsvcid": "4420" 00:19:39.338 }, 00:19:39.338 "peer_address": { 00:19:39.338 "trtype": "TCP", 00:19:39.338 "adrfam": "IPv4", 00:19:39.338 
"traddr": "10.0.0.1", 00:19:39.338 "trsvcid": "50810" 00:19:39.338 }, 00:19:39.338 "auth": { 00:19:39.338 "state": "completed", 00:19:39.338 "digest": "sha384", 00:19:39.338 "dhgroup": "ffdhe2048" 00:19:39.338 } 00:19:39.338 } 00:19:39.338 ]' 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.338 15:24:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.598 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.167 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.427 15:24:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.686 00:19:40.686 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.686 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.686 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.947 { 00:19:40.947 "cntlid": 61, 00:19:40.947 "qid": 0, 00:19:40.947 "state": "enabled", 00:19:40.947 "thread": "nvmf_tgt_poll_group_000", 00:19:40.947 "listen_address": { 00:19:40.947 "trtype": "TCP", 00:19:40.947 "adrfam": "IPv4", 00:19:40.947 "traddr": "10.0.0.2", 00:19:40.947 "trsvcid": "4420" 00:19:40.947 }, 00:19:40.947 "peer_address": { 00:19:40.947 "trtype": "TCP", 00:19:40.947 "adrfam": "IPv4", 00:19:40.947 "traddr": "10.0.0.1", 00:19:40.947 "trsvcid": "50826" 00:19:40.947 }, 00:19:40.947 "auth": { 00:19:40.947 "state": "completed", 00:19:40.947 "digest": "sha384", 00:19:40.947 "dhgroup": "ffdhe2048" 00:19:40.947 } 00:19:40.947 } 00:19:40.947 ]' 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.947 15:24:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.207 15:24:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.775 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.035 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.295 00:19:42.295 15:24:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.295 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.295 { 00:19:42.295 "cntlid": 63, 00:19:42.295 "qid": 0, 00:19:42.295 "state": "enabled", 00:19:42.295 "thread": "nvmf_tgt_poll_group_000", 00:19:42.295 "listen_address": { 00:19:42.295 "trtype": "TCP", 00:19:42.295 "adrfam": "IPv4", 00:19:42.295 "traddr": "10.0.0.2", 00:19:42.296 "trsvcid": "4420" 00:19:42.296 }, 00:19:42.296 "peer_address": { 00:19:42.296 "trtype": "TCP", 00:19:42.296 "adrfam": "IPv4", 00:19:42.296 "traddr": "10.0.0.1", 00:19:42.296 "trsvcid": "50862" 00:19:42.296 }, 00:19:42.296 "auth": { 00:19:42.296 "state": "completed", 00:19:42.296 "digest": "sha384", 00:19:42.296 "dhgroup": "ffdhe2048" 00:19:42.296 } 00:19:42.296 } 00:19:42.296 ]' 00:19:42.296 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.556 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.556 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.556 15:24:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:42.556 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.556 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.556 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.556 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.816 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.386 15:24:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.645 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.905 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.905 { 
00:19:43.905 "cntlid": 65, 00:19:43.905 "qid": 0, 00:19:43.905 "state": "enabled", 00:19:43.905 "thread": "nvmf_tgt_poll_group_000", 00:19:43.905 "listen_address": { 00:19:43.905 "trtype": "TCP", 00:19:43.905 "adrfam": "IPv4", 00:19:43.905 "traddr": "10.0.0.2", 00:19:43.905 "trsvcid": "4420" 00:19:43.905 }, 00:19:43.905 "peer_address": { 00:19:43.905 "trtype": "TCP", 00:19:43.905 "adrfam": "IPv4", 00:19:43.905 "traddr": "10.0.0.1", 00:19:43.905 "trsvcid": "50896" 00:19:43.905 }, 00:19:43.905 "auth": { 00:19:43.905 "state": "completed", 00:19:43.905 "digest": "sha384", 00:19:43.905 "dhgroup": "ffdhe3072" 00:19:43.905 } 00:19:43.905 } 00:19:43.905 ]' 00:19:43.905 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.165 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.425 15:24:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.994 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.311 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.312 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.312 15:24:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.585 { 00:19:45.585 "cntlid": 67, 00:19:45.585 "qid": 0, 00:19:45.585 "state": "enabled", 00:19:45.585 "thread": "nvmf_tgt_poll_group_000", 00:19:45.585 "listen_address": { 00:19:45.585 "trtype": "TCP", 00:19:45.585 "adrfam": "IPv4", 00:19:45.585 "traddr": "10.0.0.2", 00:19:45.585 "trsvcid": "4420" 00:19:45.585 }, 00:19:45.585 "peer_address": { 00:19:45.585 "trtype": "TCP", 00:19:45.585 "adrfam": "IPv4", 00:19:45.585 "traddr": "10.0.0.1", 00:19:45.585 "trsvcid": "50918" 00:19:45.585 }, 00:19:45.585 "auth": { 00:19:45.585 "state": "completed", 00:19:45.585 "digest": "sha384", 00:19:45.585 "dhgroup": "ffdhe3072" 00:19:45.585 } 00:19:45.585 } 00:19:45.585 ]' 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.585 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.586 15:24:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.586 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.874 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.874 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.874 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.874 15:24:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.812 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.071 00:19:47.071 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.071 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.071 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.071 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.330 { 00:19:47.330 "cntlid": 69, 00:19:47.330 "qid": 0, 00:19:47.330 "state": "enabled", 00:19:47.330 "thread": "nvmf_tgt_poll_group_000", 00:19:47.330 "listen_address": { 00:19:47.330 "trtype": "TCP", 00:19:47.330 "adrfam": "IPv4", 00:19:47.330 "traddr": "10.0.0.2", 00:19:47.330 "trsvcid": "4420" 00:19:47.330 }, 00:19:47.330 "peer_address": { 00:19:47.330 "trtype": "TCP", 00:19:47.330 "adrfam": "IPv4", 00:19:47.330 "traddr": "10.0.0.1", 00:19:47.330 "trsvcid": "42542" 00:19:47.330 }, 00:19:47.330 "auth": { 00:19:47.330 "state": "completed", 00:19:47.330 "digest": "sha384", 00:19:47.330 "dhgroup": "ffdhe3072" 00:19:47.330 } 00:19:47.330 } 00:19:47.330 ]' 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.330 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.589 15:24:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret 
DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:48.158 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.158 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:48.158 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.159 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.159 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.159 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.159 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.159 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.419 15:24:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.679 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.680 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.940 { 00:19:48.940 "cntlid": 71, 00:19:48.940 "qid": 0, 00:19:48.940 "state": "enabled", 00:19:48.940 "thread": "nvmf_tgt_poll_group_000", 00:19:48.940 "listen_address": { 00:19:48.940 "trtype": "TCP", 00:19:48.940 "adrfam": "IPv4", 00:19:48.940 "traddr": "10.0.0.2", 00:19:48.940 "trsvcid": "4420" 00:19:48.940 }, 00:19:48.940 "peer_address": { 00:19:48.940 "trtype": "TCP", 00:19:48.940 "adrfam": "IPv4", 00:19:48.940 "traddr": "10.0.0.1", 00:19:48.940 "trsvcid": "42564" 00:19:48.940 }, 00:19:48.940 "auth": { 00:19:48.940 "state": "completed", 00:19:48.940 "digest": "sha384", 00:19:48.940 "dhgroup": "ffdhe3072" 00:19:48.940 } 00:19:48.940 } 00:19:48.940 ]' 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.940 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.200 15:24:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:49.771 15:24:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:50.031 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:50.031 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.031 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.032 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.292 00:19:50.292 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.292 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.292 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.552 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:50.552 { 00:19:50.552 "cntlid": 73, 00:19:50.552 "qid": 0, 00:19:50.552 "state": "enabled", 00:19:50.552 "thread": "nvmf_tgt_poll_group_000", 00:19:50.552 "listen_address": { 00:19:50.553 "trtype": "TCP", 00:19:50.553 "adrfam": "IPv4", 00:19:50.553 "traddr": "10.0.0.2", 00:19:50.553 "trsvcid": "4420" 00:19:50.553 }, 00:19:50.553 "peer_address": { 00:19:50.553 "trtype": "TCP", 00:19:50.553 "adrfam": "IPv4", 00:19:50.553 "traddr": "10.0.0.1", 00:19:50.553 "trsvcid": "42590" 00:19:50.553 }, 00:19:50.553 "auth": { 00:19:50.553 
"state": "completed", 00:19:50.553 "digest": "sha384", 00:19:50.553 "dhgroup": "ffdhe4096" 00:19:50.553 } 00:19:50.553 } 00:19:50.553 ]' 00:19:50.553 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:50.553 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:50.553 15:24:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.553 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.553 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.553 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.553 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.553 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.814 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.384 15:25:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.644 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.905 00:19:51.905 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.905 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.905 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.165 { 00:19:52.165 "cntlid": 75, 00:19:52.165 "qid": 0, 00:19:52.165 "state": "enabled", 00:19:52.165 "thread": "nvmf_tgt_poll_group_000", 00:19:52.165 "listen_address": { 00:19:52.165 "trtype": "TCP", 00:19:52.165 "adrfam": "IPv4", 00:19:52.165 "traddr": "10.0.0.2", 00:19:52.165 "trsvcid": "4420" 00:19:52.165 }, 00:19:52.165 "peer_address": { 00:19:52.165 "trtype": "TCP", 00:19:52.165 "adrfam": "IPv4", 00:19:52.165 "traddr": "10.0.0.1", 00:19:52.165 "trsvcid": "42620" 00:19:52.165 }, 00:19:52.165 "auth": { 00:19:52.165 "state": "completed", 00:19:52.165 "digest": "sha384", 00:19:52.165 "dhgroup": "ffdhe4096" 00:19:52.165 } 00:19:52.165 } 00:19:52.165 ]' 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.165 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.426 15:25:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:52.997 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.258 15:25:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:53.519 00:19:53.519 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.519 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.519 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.780 { 00:19:53.780 "cntlid": 77, 00:19:53.780 "qid": 0, 00:19:53.780 "state": "enabled", 00:19:53.780 "thread": "nvmf_tgt_poll_group_000", 00:19:53.780 "listen_address": { 00:19:53.780 "trtype": "TCP", 00:19:53.780 "adrfam": "IPv4", 00:19:53.780 "traddr": "10.0.0.2", 00:19:53.780 "trsvcid": "4420" 00:19:53.780 }, 00:19:53.780 "peer_address": { 00:19:53.780 "trtype": "TCP", 00:19:53.780 "adrfam": "IPv4", 00:19:53.780 "traddr": "10.0.0.1", 00:19:53.780 "trsvcid": "42642" 00:19:53.780 }, 00:19:53.780 "auth": { 00:19:53.780 "state": "completed", 00:19:53.780 "digest": "sha384", 00:19:53.780 "dhgroup": "ffdhe4096" 00:19:53.780 } 00:19:53.780 } 00:19:53.780 ]' 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.780 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.041 15:25:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.611 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.872 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.133 00:19:55.133 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.133 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.133 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.393 { 00:19:55.393 "cntlid": 79, 00:19:55.393 "qid": 
0, 00:19:55.393 "state": "enabled", 00:19:55.393 "thread": "nvmf_tgt_poll_group_000", 00:19:55.393 "listen_address": { 00:19:55.393 "trtype": "TCP", 00:19:55.393 "adrfam": "IPv4", 00:19:55.393 "traddr": "10.0.0.2", 00:19:55.393 "trsvcid": "4420" 00:19:55.393 }, 00:19:55.393 "peer_address": { 00:19:55.393 "trtype": "TCP", 00:19:55.393 "adrfam": "IPv4", 00:19:55.393 "traddr": "10.0.0.1", 00:19:55.393 "trsvcid": "42682" 00:19:55.393 }, 00:19:55.393 "auth": { 00:19:55.393 "state": "completed", 00:19:55.393 "digest": "sha384", 00:19:55.393 "dhgroup": "ffdhe4096" 00:19:55.393 } 00:19:55.393 } 00:19:55.393 ]' 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.393 15:25:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.653 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.225 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:56.485 15:25:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.485 15:25:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.746 00:19:56.746 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.746 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.746 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.006 { 00:19:57.006 "cntlid": 81, 00:19:57.006 "qid": 0, 00:19:57.006 "state": "enabled", 00:19:57.006 "thread": "nvmf_tgt_poll_group_000", 00:19:57.006 "listen_address": { 00:19:57.006 "trtype": "TCP", 00:19:57.006 "adrfam": "IPv4", 00:19:57.006 "traddr": "10.0.0.2", 00:19:57.006 "trsvcid": "4420" 00:19:57.006 }, 00:19:57.006 "peer_address": { 00:19:57.006 "trtype": "TCP", 00:19:57.006 "adrfam": "IPv4", 00:19:57.006 "traddr": "10.0.0.1", 00:19:57.006 "trsvcid": "36648" 00:19:57.006 }, 00:19:57.006 "auth": { 00:19:57.006 "state": "completed", 00:19:57.006 "digest": "sha384", 00:19:57.006 "dhgroup": "ffdhe6144" 00:19:57.006 } 00:19:57.006 } 00:19:57.006 ]' 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.006 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.267 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.267 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.267 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.267 15:25:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.209 15:25:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.470 00:19:58.470 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.470 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.470 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.731 { 00:19:58.731 "cntlid": 83, 00:19:58.731 "qid": 0, 00:19:58.731 "state": "enabled", 00:19:58.731 "thread": "nvmf_tgt_poll_group_000", 00:19:58.731 "listen_address": { 00:19:58.731 "trtype": "TCP", 00:19:58.731 "adrfam": "IPv4", 00:19:58.731 "traddr": "10.0.0.2", 00:19:58.731 "trsvcid": "4420" 00:19:58.731 }, 00:19:58.731 "peer_address": { 00:19:58.731 "trtype": "TCP", 00:19:58.731 "adrfam": "IPv4", 00:19:58.731 "traddr": "10.0.0.1", 00:19:58.731 "trsvcid": "36682" 00:19:58.731 }, 00:19:58.731 "auth": { 00:19:58.731 "state": "completed", 00:19:58.731 "digest": "sha384", 00:19:58.731 "dhgroup": "ffdhe6144" 00:19:58.731 } 00:19:58.731 } 00:19:58.731 ]' 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.731 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.050 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.050 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.050 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.050 15:25:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret 
DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.992 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.252 00:20:00.252 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.252 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.252 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.513 15:25:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.513 15:25:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.513 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.513 15:25:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.513 { 00:20:00.513 "cntlid": 85, 00:20:00.513 "qid": 0, 00:20:00.513 "state": "enabled", 00:20:00.513 "thread": "nvmf_tgt_poll_group_000", 00:20:00.513 "listen_address": { 00:20:00.513 "trtype": "TCP", 00:20:00.513 "adrfam": "IPv4", 00:20:00.513 "traddr": "10.0.0.2", 00:20:00.513 "trsvcid": "4420" 00:20:00.513 }, 00:20:00.513 "peer_address": { 00:20:00.513 "trtype": "TCP", 00:20:00.513 "adrfam": "IPv4", 00:20:00.513 "traddr": "10.0.0.1", 00:20:00.513 "trsvcid": "36702" 00:20:00.513 }, 00:20:00.513 "auth": { 00:20:00.513 "state": "completed", 00:20:00.513 "digest": "sha384", 00:20:00.513 "dhgroup": "ffdhe6144" 00:20:00.513 } 00:20:00.513 } 00:20:00.513 ]' 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:00.513 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.773 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.773 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.774 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.774 15:25:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
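The trace above and below repeats the same connect_authenticate cycle from target/auth.sh for every digest, DH group and key index (here sha384 with ffdhe3072, ffdhe4096 and ffdhe6144, keys 0-3). A minimal sketch of one such cycle, reconstructed only from the commands visible in this trace — rpc, subnqn, hostnqn, hostid, key, ckey and the dhchap_* secret variables are stand-in names introduced here, not the script's own variables — looks roughly like this:

  # Stand-in values; the real auth.sh defines its own variables and wrappers.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6
  key=key2 ckey=ckey2            # key object names registered earlier in the test

  # Host side: restrict the SPDK initiator to the digest/DH group under test.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

  # Target side: allow the host NQN with the DH-HMAC-CHAP key pair under test
  # (shown as a direct rpc.py call; the script goes through its rpc_cmd wrapper).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

  # Authenticate via the SPDK initiator, then verify the completed handshake.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
  qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect sha384
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect ffdhe6144
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expect completed
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Repeat the handshake with the kernel initiator using the raw DHHC-1 secrets
  # (the DHHC-1:0x:... strings in the trace), then clean up for the next combination.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$dhchap_key" --dhchap-ctrl-secret "$dhchap_ckey"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the trace, each hostrpc call appears twice: once as the auth.sh function line and once as its rpc.py expansion against /var/tmp/host.sock, the initiator-side RPC socket, while the rpc_cmd calls (nvmf_subsystem_*) drive the target side.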
00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.717 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.978 00:20:01.978 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.978 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.978 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.239 { 00:20:02.239 "cntlid": 87, 00:20:02.239 "qid": 0, 00:20:02.239 "state": "enabled", 00:20:02.239 "thread": "nvmf_tgt_poll_group_000", 00:20:02.239 "listen_address": { 00:20:02.239 "trtype": "TCP", 00:20:02.239 "adrfam": "IPv4", 00:20:02.239 "traddr": "10.0.0.2", 00:20:02.239 "trsvcid": "4420" 00:20:02.239 }, 00:20:02.239 "peer_address": { 00:20:02.239 "trtype": "TCP", 00:20:02.239 "adrfam": "IPv4", 00:20:02.239 "traddr": "10.0.0.1", 00:20:02.239 "trsvcid": "36730" 00:20:02.239 }, 00:20:02.239 "auth": { 00:20:02.239 "state": "completed", 
00:20:02.239 "digest": "sha384", 00:20:02.239 "dhgroup": "ffdhe6144" 00:20:02.239 } 00:20:02.239 } 00:20:02.239 ]' 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.239 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.546 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.546 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.546 15:25:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.546 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:03.138 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.138 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:03.138 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.138 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.398 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.398 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.398 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.398 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.399 15:25:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.970 00:20:03.970 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.970 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.970 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.229 { 00:20:04.229 "cntlid": 89, 00:20:04.229 "qid": 0, 00:20:04.229 "state": "enabled", 00:20:04.229 "thread": "nvmf_tgt_poll_group_000", 00:20:04.229 "listen_address": { 00:20:04.229 "trtype": "TCP", 00:20:04.229 "adrfam": "IPv4", 00:20:04.229 "traddr": "10.0.0.2", 00:20:04.229 "trsvcid": "4420" 00:20:04.229 }, 00:20:04.229 "peer_address": { 00:20:04.229 "trtype": "TCP", 00:20:04.229 "adrfam": "IPv4", 00:20:04.229 "traddr": "10.0.0.1", 00:20:04.229 "trsvcid": "36762" 00:20:04.229 }, 00:20:04.229 "auth": { 00:20:04.229 "state": "completed", 00:20:04.229 "digest": "sha384", 00:20:04.229 "dhgroup": "ffdhe8192" 00:20:04.229 } 00:20:04.229 } 00:20:04.229 ]' 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.229 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.230 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.230 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.230 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.230 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.490 15:25:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:05.060 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.321 15:25:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:20:05.892 00:20:05.892 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.892 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.892 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.153 { 00:20:06.153 "cntlid": 91, 00:20:06.153 "qid": 0, 00:20:06.153 "state": "enabled", 00:20:06.153 "thread": "nvmf_tgt_poll_group_000", 00:20:06.153 "listen_address": { 00:20:06.153 "trtype": "TCP", 00:20:06.153 "adrfam": "IPv4", 00:20:06.153 "traddr": "10.0.0.2", 00:20:06.153 "trsvcid": "4420" 00:20:06.153 }, 00:20:06.153 "peer_address": { 00:20:06.153 "trtype": "TCP", 00:20:06.153 "adrfam": "IPv4", 00:20:06.153 "traddr": "10.0.0.1", 00:20:06.153 "trsvcid": "36798" 00:20:06.153 }, 00:20:06.153 "auth": { 00:20:06.153 "state": "completed", 00:20:06.153 "digest": "sha384", 00:20:06.153 "dhgroup": "ffdhe8192" 00:20:06.153 } 00:20:06.153 } 00:20:06.153 ]' 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.153 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.414 15:25:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:06.986 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.986 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:06.986 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:06.986 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.247 15:25:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.817 00:20:07.817 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.817 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.817 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.077 { 
00:20:08.077 "cntlid": 93, 00:20:08.077 "qid": 0, 00:20:08.077 "state": "enabled", 00:20:08.077 "thread": "nvmf_tgt_poll_group_000", 00:20:08.077 "listen_address": { 00:20:08.077 "trtype": "TCP", 00:20:08.077 "adrfam": "IPv4", 00:20:08.077 "traddr": "10.0.0.2", 00:20:08.077 "trsvcid": "4420" 00:20:08.077 }, 00:20:08.077 "peer_address": { 00:20:08.077 "trtype": "TCP", 00:20:08.077 "adrfam": "IPv4", 00:20:08.077 "traddr": "10.0.0.1", 00:20:08.077 "trsvcid": "60172" 00:20:08.077 }, 00:20:08.077 "auth": { 00:20:08.077 "state": "completed", 00:20:08.077 "digest": "sha384", 00:20:08.077 "dhgroup": "ffdhe8192" 00:20:08.077 } 00:20:08.077 } 00:20:08.077 ]' 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.077 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.337 15:25:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:08.906 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.166 15:25:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.166 15:25:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.735 00:20:09.735 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.735 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.735 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.995 { 00:20:09.995 "cntlid": 95, 00:20:09.995 "qid": 0, 00:20:09.995 "state": "enabled", 00:20:09.995 "thread": "nvmf_tgt_poll_group_000", 00:20:09.995 "listen_address": { 00:20:09.995 "trtype": "TCP", 00:20:09.995 "adrfam": "IPv4", 00:20:09.995 "traddr": "10.0.0.2", 00:20:09.995 "trsvcid": "4420" 00:20:09.995 }, 00:20:09.995 "peer_address": { 00:20:09.995 "trtype": "TCP", 00:20:09.995 "adrfam": "IPv4", 00:20:09.995 "traddr": "10.0.0.1", 00:20:09.995 "trsvcid": "60206" 00:20:09.995 }, 00:20:09.995 "auth": { 00:20:09.995 "state": "completed", 00:20:09.995 "digest": "sha384", 00:20:09.995 "dhgroup": "ffdhe8192" 00:20:09.995 } 00:20:09.995 } 00:20:09.995 ]' 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.995 15:25:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.995 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.254 15:25:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.198 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.460 00:20:11.460 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.460 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.460 15:25:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.460 { 00:20:11.460 "cntlid": 97, 00:20:11.460 "qid": 0, 00:20:11.460 "state": "enabled", 00:20:11.460 "thread": "nvmf_tgt_poll_group_000", 00:20:11.460 "listen_address": { 00:20:11.460 "trtype": "TCP", 00:20:11.460 "adrfam": "IPv4", 00:20:11.460 "traddr": "10.0.0.2", 00:20:11.460 "trsvcid": "4420" 00:20:11.460 }, 00:20:11.460 "peer_address": { 00:20:11.460 "trtype": "TCP", 00:20:11.460 "adrfam": "IPv4", 00:20:11.460 "traddr": "10.0.0.1", 00:20:11.460 "trsvcid": "60232" 00:20:11.460 }, 00:20:11.460 "auth": { 00:20:11.460 "state": "completed", 00:20:11.460 "digest": "sha512", 00:20:11.460 "dhgroup": "null" 00:20:11.460 } 00:20:11.460 } 00:20:11.460 ]' 00:20:11.460 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.721 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.982 15:25:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret 
DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.554 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.815 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:12.815 00:20:13.076 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.077 { 00:20:13.077 "cntlid": 99, 00:20:13.077 "qid": 0, 00:20:13.077 "state": "enabled", 00:20:13.077 "thread": "nvmf_tgt_poll_group_000", 00:20:13.077 "listen_address": { 00:20:13.077 "trtype": "TCP", 00:20:13.077 "adrfam": "IPv4", 00:20:13.077 "traddr": "10.0.0.2", 00:20:13.077 "trsvcid": "4420" 00:20:13.077 }, 00:20:13.077 "peer_address": { 00:20:13.077 "trtype": "TCP", 00:20:13.077 "adrfam": "IPv4", 00:20:13.077 "traddr": "10.0.0.1", 00:20:13.077 "trsvcid": "60258" 00:20:13.077 }, 00:20:13.077 "auth": { 00:20:13.077 "state": "completed", 00:20:13.077 "digest": "sha512", 00:20:13.077 "dhgroup": "null" 00:20:13.077 } 00:20:13.077 } 00:20:13.077 ]' 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.077 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.337 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:13.337 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.338 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.338 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.338 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.338 15:25:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.279 15:25:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.279 15:25:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.540 00:20:14.540 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.540 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.540 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.801 { 00:20:14.801 "cntlid": 101, 00:20:14.801 "qid": 0, 00:20:14.801 "state": "enabled", 00:20:14.801 "thread": "nvmf_tgt_poll_group_000", 00:20:14.801 "listen_address": { 00:20:14.801 "trtype": "TCP", 00:20:14.801 "adrfam": "IPv4", 00:20:14.801 "traddr": "10.0.0.2", 00:20:14.801 "trsvcid": "4420" 00:20:14.801 }, 00:20:14.801 "peer_address": { 00:20:14.801 "trtype": "TCP", 00:20:14.801 "adrfam": "IPv4", 00:20:14.801 "traddr": "10.0.0.1", 00:20:14.801 "trsvcid": "60286" 00:20:14.801 }, 00:20:14.801 "auth": 
{ 00:20:14.801 "state": "completed", 00:20:14.801 "digest": "sha512", 00:20:14.801 "dhgroup": "null" 00:20:14.801 } 00:20:14.801 } 00:20:14.801 ]' 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.801 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.061 15:25:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:15.631 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.890 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.150 00:20:16.150 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.150 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.150 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.409 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.409 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.410 { 00:20:16.410 "cntlid": 103, 00:20:16.410 "qid": 0, 00:20:16.410 "state": "enabled", 00:20:16.410 "thread": "nvmf_tgt_poll_group_000", 00:20:16.410 "listen_address": { 00:20:16.410 "trtype": "TCP", 00:20:16.410 "adrfam": "IPv4", 00:20:16.410 "traddr": "10.0.0.2", 00:20:16.410 "trsvcid": "4420" 00:20:16.410 }, 00:20:16.410 "peer_address": { 00:20:16.410 "trtype": "TCP", 00:20:16.410 "adrfam": "IPv4", 00:20:16.410 "traddr": "10.0.0.1", 00:20:16.410 "trsvcid": "58064" 00:20:16.410 }, 00:20:16.410 "auth": { 00:20:16.410 "state": "completed", 00:20:16.410 "digest": "sha512", 00:20:16.410 "dhgroup": "null" 00:20:16.410 } 00:20:16.410 } 00:20:16.410 ]' 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.410 15:25:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.669 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.240 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.499 15:25:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.759 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.759 15:25:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.759 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.759 { 00:20:17.760 "cntlid": 105, 00:20:17.760 "qid": 0, 00:20:17.760 "state": "enabled", 00:20:17.760 "thread": "nvmf_tgt_poll_group_000", 00:20:17.760 "listen_address": { 00:20:17.760 "trtype": "TCP", 00:20:17.760 "adrfam": "IPv4", 00:20:17.760 "traddr": "10.0.0.2", 00:20:17.760 "trsvcid": "4420" 00:20:17.760 }, 00:20:17.760 "peer_address": { 00:20:17.760 "trtype": "TCP", 00:20:17.760 "adrfam": "IPv4", 00:20:17.760 "traddr": "10.0.0.1", 00:20:17.760 "trsvcid": "58108" 00:20:17.760 }, 00:20:17.760 "auth": { 00:20:17.760 "state": "completed", 00:20:17.760 "digest": "sha512", 00:20:17.760 "dhgroup": "ffdhe2048" 00:20:17.760 } 00:20:17.760 } 00:20:17.760 ]' 00:20:17.760 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.020 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.280 15:25:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
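After the bdev_nvme (host-stack) checks of a pass like the one just completed, the trace shows the kernel nvme-tcp initiator being exercised in-band as well: an nvme-cli connect with matching DHHC-1 secrets, a disconnect, and removal of the host from the subsystem. A minimal sketch of that leg, where $hostid stands for the 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 UUID from the trace and $key/$ctrl_key are placeholders for the DHHC-1:xx: secret strings printed above:

  # in-band DH-CHAP leg of one pass (kernel nvme-tcp initiator)
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0     # expect: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
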
00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:18.851 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.112 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.112 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.373 { 00:20:19.373 "cntlid": 107, 00:20:19.373 "qid": 0, 00:20:19.373 "state": "enabled", 00:20:19.373 "thread": 
"nvmf_tgt_poll_group_000", 00:20:19.373 "listen_address": { 00:20:19.373 "trtype": "TCP", 00:20:19.373 "adrfam": "IPv4", 00:20:19.373 "traddr": "10.0.0.2", 00:20:19.373 "trsvcid": "4420" 00:20:19.373 }, 00:20:19.373 "peer_address": { 00:20:19.373 "trtype": "TCP", 00:20:19.373 "adrfam": "IPv4", 00:20:19.373 "traddr": "10.0.0.1", 00:20:19.373 "trsvcid": "58136" 00:20:19.373 }, 00:20:19.373 "auth": { 00:20:19.373 "state": "completed", 00:20:19.373 "digest": "sha512", 00:20:19.373 "dhgroup": "ffdhe2048" 00:20:19.373 } 00:20:19.373 } 00:20:19.373 ]' 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.373 15:25:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.638 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.654 15:25:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:20.654 15:25:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.654 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.914 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.914 { 00:20:20.914 "cntlid": 109, 00:20:20.914 "qid": 0, 00:20:20.914 "state": "enabled", 00:20:20.914 "thread": "nvmf_tgt_poll_group_000", 00:20:20.914 "listen_address": { 00:20:20.914 "trtype": "TCP", 00:20:20.914 "adrfam": "IPv4", 00:20:20.914 "traddr": "10.0.0.2", 00:20:20.914 "trsvcid": "4420" 00:20:20.914 }, 00:20:20.914 "peer_address": { 00:20:20.914 "trtype": "TCP", 00:20:20.914 "adrfam": "IPv4", 00:20:20.914 "traddr": "10.0.0.1", 00:20:20.914 "trsvcid": "58170" 00:20:20.914 }, 00:20:20.914 "auth": { 00:20:20.914 "state": "completed", 00:20:20.914 "digest": "sha512", 00:20:20.914 "dhgroup": "ffdhe2048" 00:20:20.914 } 00:20:20.914 } 00:20:20.914 ]' 00:20:20.914 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.175 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.434 15:25:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.004 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.265 15:25:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.525 00:20:22.525 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.525 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.525 15:25:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.525 { 00:20:22.525 "cntlid": 111, 00:20:22.525 "qid": 0, 00:20:22.525 "state": "enabled", 00:20:22.525 "thread": "nvmf_tgt_poll_group_000", 00:20:22.525 "listen_address": { 00:20:22.525 "trtype": "TCP", 00:20:22.525 "adrfam": "IPv4", 00:20:22.525 "traddr": "10.0.0.2", 00:20:22.525 "trsvcid": "4420" 00:20:22.525 }, 00:20:22.525 "peer_address": { 00:20:22.525 "trtype": "TCP", 00:20:22.525 "adrfam": "IPv4", 00:20:22.525 "traddr": "10.0.0.1", 00:20:22.525 "trsvcid": "58188" 00:20:22.525 }, 00:20:22.525 "auth": { 00:20:22.525 "state": "completed", 00:20:22.525 "digest": "sha512", 00:20:22.525 "dhgroup": "ffdhe2048" 00:20:22.525 } 00:20:22.525 } 00:20:22.525 ]' 00:20:22.525 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.785 15:25:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.724 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.984 00:20:23.984 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.984 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.984 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.245 { 00:20:24.245 "cntlid": 113, 00:20:24.245 "qid": 0, 00:20:24.245 "state": "enabled", 00:20:24.245 "thread": "nvmf_tgt_poll_group_000", 00:20:24.245 "listen_address": { 00:20:24.245 "trtype": "TCP", 00:20:24.245 "adrfam": "IPv4", 00:20:24.245 "traddr": "10.0.0.2", 00:20:24.245 "trsvcid": "4420" 00:20:24.245 }, 00:20:24.245 "peer_address": { 00:20:24.245 "trtype": "TCP", 00:20:24.245 "adrfam": "IPv4", 00:20:24.245 "traddr": "10.0.0.1", 00:20:24.245 "trsvcid": "58222" 00:20:24.245 }, 00:20:24.245 "auth": { 00:20:24.245 "state": "completed", 00:20:24.245 "digest": "sha512", 00:20:24.245 "dhgroup": "ffdhe3072" 00:20:24.245 } 00:20:24.245 } 00:20:24.245 ]' 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.245 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.506 15:25:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:25.077 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.339 15:25:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.600 00:20:25.600 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.600 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.600 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.862 { 00:20:25.862 "cntlid": 115, 00:20:25.862 "qid": 0, 00:20:25.862 "state": "enabled", 00:20:25.862 "thread": "nvmf_tgt_poll_group_000", 00:20:25.862 "listen_address": { 00:20:25.862 "trtype": "TCP", 00:20:25.862 "adrfam": "IPv4", 00:20:25.862 "traddr": "10.0.0.2", 00:20:25.862 "trsvcid": "4420" 00:20:25.862 }, 00:20:25.862 "peer_address": { 00:20:25.862 "trtype": "TCP", 00:20:25.862 "adrfam": "IPv4", 00:20:25.862 "traddr": "10.0.0.1", 00:20:25.862 "trsvcid": "58254" 00:20:25.862 }, 00:20:25.862 "auth": { 00:20:25.862 "state": "completed", 00:20:25.862 "digest": "sha512", 00:20:25.862 "dhgroup": "ffdhe3072" 00:20:25.862 } 00:20:25.862 } 
00:20:25.862 ]' 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.862 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.123 15:25:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.695 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.956 15:25:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.956 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.216 00:20:27.216 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.216 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.216 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.476 { 00:20:27.476 "cntlid": 117, 00:20:27.476 "qid": 0, 00:20:27.476 "state": "enabled", 00:20:27.476 "thread": "nvmf_tgt_poll_group_000", 00:20:27.476 "listen_address": { 00:20:27.476 "trtype": "TCP", 00:20:27.476 "adrfam": "IPv4", 00:20:27.476 "traddr": "10.0.0.2", 00:20:27.476 "trsvcid": "4420" 00:20:27.476 }, 00:20:27.476 "peer_address": { 00:20:27.476 "trtype": "TCP", 00:20:27.476 "adrfam": "IPv4", 00:20:27.476 "traddr": "10.0.0.1", 00:20:27.476 "trsvcid": "45658" 00:20:27.476 }, 00:20:27.476 "auth": { 00:20:27.476 "state": "completed", 00:20:27.476 "digest": "sha512", 00:20:27.476 "dhgroup": "ffdhe3072" 00:20:27.476 } 00:20:27.476 } 00:20:27.476 ]' 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.476 15:25:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.476 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.476 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.476 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.737 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:28.308 15:25:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.569 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:28.829 00:20:28.829 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.829 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.829 15:25:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.091 { 00:20:29.091 "cntlid": 119, 00:20:29.091 "qid": 0, 00:20:29.091 "state": "enabled", 00:20:29.091 "thread": "nvmf_tgt_poll_group_000", 00:20:29.091 "listen_address": { 00:20:29.091 "trtype": "TCP", 00:20:29.091 "adrfam": "IPv4", 00:20:29.091 "traddr": "10.0.0.2", 00:20:29.091 "trsvcid": "4420" 00:20:29.091 }, 00:20:29.091 "peer_address": { 00:20:29.091 "trtype": "TCP", 00:20:29.091 "adrfam": "IPv4", 00:20:29.091 "traddr": "10.0.0.1", 00:20:29.091 "trsvcid": "45682" 00:20:29.091 }, 00:20:29.091 "auth": { 00:20:29.091 "state": "completed", 00:20:29.091 "digest": "sha512", 00:20:29.091 "dhgroup": "ffdhe3072" 00:20:29.091 } 00:20:29.091 } 00:20:29.091 ]' 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.091 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.352 15:25:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.923 15:25:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:29.923 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:30.184 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.185 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.446 00:20:30.446 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.446 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.446 15:25:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.707 { 00:20:30.707 "cntlid": 121, 00:20:30.707 "qid": 0, 00:20:30.707 "state": "enabled", 00:20:30.707 "thread": "nvmf_tgt_poll_group_000", 00:20:30.707 "listen_address": { 00:20:30.707 "trtype": "TCP", 00:20:30.707 "adrfam": "IPv4", 
00:20:30.707 "traddr": "10.0.0.2", 00:20:30.707 "trsvcid": "4420" 00:20:30.707 }, 00:20:30.707 "peer_address": { 00:20:30.707 "trtype": "TCP", 00:20:30.707 "adrfam": "IPv4", 00:20:30.707 "traddr": "10.0.0.1", 00:20:30.707 "trsvcid": "45714" 00:20:30.707 }, 00:20:30.707 "auth": { 00:20:30.707 "state": "completed", 00:20:30.707 "digest": "sha512", 00:20:30.707 "dhgroup": "ffdhe4096" 00:20:30.707 } 00:20:30.707 } 00:20:30.707 ]' 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.707 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.968 15:25:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:31.539 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.798 15:25:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.798 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.799 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.799 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.799 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.059 00:20:32.059 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.059 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.059 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.320 { 00:20:32.320 "cntlid": 123, 00:20:32.320 "qid": 0, 00:20:32.320 "state": "enabled", 00:20:32.320 "thread": "nvmf_tgt_poll_group_000", 00:20:32.320 "listen_address": { 00:20:32.320 "trtype": "TCP", 00:20:32.320 "adrfam": "IPv4", 00:20:32.320 "traddr": "10.0.0.2", 00:20:32.320 "trsvcid": "4420" 00:20:32.320 }, 00:20:32.320 "peer_address": { 00:20:32.320 "trtype": "TCP", 00:20:32.320 "adrfam": "IPv4", 00:20:32.320 "traddr": "10.0.0.1", 00:20:32.320 "trsvcid": "45744" 00:20:32.320 }, 00:20:32.320 "auth": { 00:20:32.320 "state": "completed", 00:20:32.320 "digest": "sha512", 00:20:32.320 "dhgroup": "ffdhe4096" 00:20:32.320 } 00:20:32.320 } 00:20:32.320 ]' 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.320 15:25:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.320 15:25:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.580 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:33.150 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.411 15:25:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.671 00:20:33.671 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.671 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.671 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.932 { 00:20:33.932 "cntlid": 125, 00:20:33.932 "qid": 0, 00:20:33.932 "state": "enabled", 00:20:33.932 "thread": "nvmf_tgt_poll_group_000", 00:20:33.932 "listen_address": { 00:20:33.932 "trtype": "TCP", 00:20:33.932 "adrfam": "IPv4", 00:20:33.932 "traddr": "10.0.0.2", 00:20:33.932 "trsvcid": "4420" 00:20:33.932 }, 00:20:33.932 "peer_address": { 00:20:33.932 "trtype": "TCP", 00:20:33.932 "adrfam": "IPv4", 00:20:33.932 "traddr": "10.0.0.1", 00:20:33.932 "trsvcid": "45776" 00:20:33.932 }, 00:20:33.932 "auth": { 00:20:33.932 "state": "completed", 00:20:33.932 "digest": "sha512", 00:20:33.932 "dhgroup": "ffdhe4096" 00:20:33.932 } 00:20:33.932 } 00:20:33.932 ]' 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.932 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.191 15:25:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
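(Every qpair dump in this trace has the same shape, and the pass/fail decision rests only on the three fields under "auth". Below is a trimmed, self-contained version of that check, assuming the same JSON layout printed above and using the same jq filters; cntlid and qid are kept only for context, and the real dumps also carry listen_address and peer_address.)

qpairs='[{"cntlid": 125, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe4096"}}]'
jq -r '.[0].auth.digest'  <<< "$qpairs"   # sha512    -> must match the digest under test
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # ffdhe4096 -> must match the dhgroup under test
jq -r '.[0].auth.state'   <<< "$qpairs"   # completed -> DH-HMAC-CHAP negotiation finished

(As a reading aid, the cntlid in these dumps advances by two per round in this trace, 105 through 125 so far, which lines each dump up with its dhgroup/key iteration.)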
00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.166 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.426 00:20:35.426 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.426 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.426 15:25:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.426 { 00:20:35.426 "cntlid": 127, 00:20:35.426 "qid": 0, 00:20:35.426 "state": "enabled", 00:20:35.426 "thread": "nvmf_tgt_poll_group_000", 00:20:35.426 "listen_address": { 00:20:35.426 "trtype": "TCP", 00:20:35.426 "adrfam": "IPv4", 00:20:35.426 "traddr": "10.0.0.2", 00:20:35.426 "trsvcid": "4420" 00:20:35.426 }, 00:20:35.426 "peer_address": { 00:20:35.426 "trtype": "TCP", 00:20:35.426 "adrfam": "IPv4", 00:20:35.426 "traddr": "10.0.0.1", 00:20:35.426 "trsvcid": "45810" 00:20:35.426 }, 00:20:35.426 "auth": { 00:20:35.426 "state": "completed", 00:20:35.426 "digest": "sha512", 00:20:35.426 "dhgroup": "ffdhe4096" 00:20:35.426 } 00:20:35.426 } 00:20:35.426 ]' 00:20:35.426 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.687 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.949 15:25:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.520 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.808 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.069 00:20:37.069 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.069 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.069 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.329 { 00:20:37.329 "cntlid": 129, 00:20:37.329 "qid": 0, 00:20:37.329 "state": "enabled", 00:20:37.329 "thread": "nvmf_tgt_poll_group_000", 00:20:37.329 "listen_address": { 00:20:37.329 "trtype": "TCP", 00:20:37.329 "adrfam": "IPv4", 00:20:37.329 "traddr": "10.0.0.2", 00:20:37.329 "trsvcid": "4420" 00:20:37.329 }, 00:20:37.329 "peer_address": { 00:20:37.329 "trtype": "TCP", 00:20:37.329 "adrfam": "IPv4", 00:20:37.329 "traddr": "10.0.0.1", 00:20:37.329 "trsvcid": "43146" 00:20:37.329 }, 00:20:37.329 "auth": { 00:20:37.329 "state": "completed", 00:20:37.329 "digest": "sha512", 00:20:37.329 "dhgroup": "ffdhe6144" 00:20:37.329 } 00:20:37.329 } 00:20:37.329 ]' 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.329 15:25:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.329 15:25:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.591 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:38.161 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.161 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:38.161 15:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.161 15:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.421 15:25:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.421 15:25:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.993 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.993 { 00:20:38.993 "cntlid": 131, 00:20:38.993 "qid": 0, 00:20:38.993 "state": "enabled", 00:20:38.993 "thread": "nvmf_tgt_poll_group_000", 00:20:38.993 "listen_address": { 00:20:38.993 "trtype": "TCP", 00:20:38.993 "adrfam": "IPv4", 00:20:38.993 "traddr": "10.0.0.2", 00:20:38.993 "trsvcid": "4420" 00:20:38.993 }, 00:20:38.993 "peer_address": { 00:20:38.993 "trtype": "TCP", 00:20:38.993 "adrfam": "IPv4", 00:20:38.993 "traddr": "10.0.0.1", 00:20:38.993 "trsvcid": "43170" 00:20:38.993 }, 00:20:38.993 "auth": { 00:20:38.993 "state": "completed", 00:20:38.993 "digest": "sha512", 00:20:38.993 "dhgroup": "ffdhe6144" 00:20:38.993 } 00:20:38.993 } 00:20:38.993 ]' 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:38.993 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.269 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.269 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.269 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.269 15:25:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.212 15:25:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.473 00:20:40.473 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.473 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.473 15:25:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.732 { 00:20:40.732 "cntlid": 133, 00:20:40.732 "qid": 0, 00:20:40.732 "state": "enabled", 00:20:40.732 "thread": "nvmf_tgt_poll_group_000", 00:20:40.732 "listen_address": { 00:20:40.732 "trtype": "TCP", 00:20:40.732 "adrfam": "IPv4", 00:20:40.732 "traddr": "10.0.0.2", 00:20:40.732 "trsvcid": "4420" 00:20:40.732 }, 00:20:40.732 "peer_address": { 00:20:40.732 "trtype": "TCP", 00:20:40.732 "adrfam": "IPv4", 00:20:40.732 "traddr": "10.0.0.1", 00:20:40.732 "trsvcid": "43194" 00:20:40.732 }, 00:20:40.732 "auth": { 00:20:40.732 "state": "completed", 00:20:40.732 "digest": "sha512", 00:20:40.732 "dhgroup": "ffdhe6144" 00:20:40.732 } 00:20:40.732 } 00:20:40.732 ]' 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:40.732 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.992 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.992 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.992 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.992 15:25:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.935 15:25:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.935 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.506 00:20:42.506 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.506 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.506 15:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.506 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.506 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.506 15:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.506 15:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.507 { 00:20:42.507 "cntlid": 135, 00:20:42.507 "qid": 0, 00:20:42.507 "state": "enabled", 00:20:42.507 "thread": "nvmf_tgt_poll_group_000", 00:20:42.507 "listen_address": { 00:20:42.507 "trtype": "TCP", 00:20:42.507 "adrfam": "IPv4", 00:20:42.507 "traddr": "10.0.0.2", 00:20:42.507 "trsvcid": "4420" 00:20:42.507 }, 
00:20:42.507 "peer_address": { 00:20:42.507 "trtype": "TCP", 00:20:42.507 "adrfam": "IPv4", 00:20:42.507 "traddr": "10.0.0.1", 00:20:42.507 "trsvcid": "43216" 00:20:42.507 }, 00:20:42.507 "auth": { 00:20:42.507 "state": "completed", 00:20:42.507 "digest": "sha512", 00:20:42.507 "dhgroup": "ffdhe6144" 00:20:42.507 } 00:20:42.507 } 00:20:42.507 ]' 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.507 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.767 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.767 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.767 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.767 15:25:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.709 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.710 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.710 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.278 00:20:44.278 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.278 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.278 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.538 { 00:20:44.538 "cntlid": 137, 00:20:44.538 "qid": 0, 00:20:44.538 "state": "enabled", 00:20:44.538 "thread": "nvmf_tgt_poll_group_000", 00:20:44.538 "listen_address": { 00:20:44.538 "trtype": "TCP", 00:20:44.538 "adrfam": "IPv4", 00:20:44.538 "traddr": "10.0.0.2", 00:20:44.538 "trsvcid": "4420" 00:20:44.538 }, 00:20:44.538 "peer_address": { 00:20:44.538 "trtype": "TCP", 00:20:44.538 "adrfam": "IPv4", 00:20:44.538 "traddr": "10.0.0.1", 00:20:44.538 "trsvcid": "43246" 00:20:44.538 }, 00:20:44.538 "auth": { 00:20:44.538 "state": "completed", 00:20:44.538 "digest": "sha512", 00:20:44.538 "dhgroup": "ffdhe8192" 00:20:44.538 } 00:20:44.538 } 00:20:44.538 ]' 00:20:44.538 15:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.538 15:25:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.538 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.797 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.366 15:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.626 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.196 00:20:46.196 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.196 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.196 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.455 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.455 { 00:20:46.455 "cntlid": 139, 00:20:46.455 "qid": 0, 00:20:46.455 "state": "enabled", 00:20:46.455 "thread": "nvmf_tgt_poll_group_000", 00:20:46.455 "listen_address": { 00:20:46.455 "trtype": "TCP", 00:20:46.455 "adrfam": "IPv4", 00:20:46.455 "traddr": "10.0.0.2", 00:20:46.455 "trsvcid": "4420" 00:20:46.455 }, 00:20:46.455 "peer_address": { 00:20:46.455 "trtype": "TCP", 00:20:46.455 "adrfam": "IPv4", 00:20:46.455 "traddr": "10.0.0.1", 00:20:46.455 "trsvcid": "52500" 00:20:46.455 }, 00:20:46.455 "auth": { 00:20:46.455 "state": "completed", 00:20:46.455 "digest": "sha512", 00:20:46.456 "dhgroup": "ffdhe8192" 00:20:46.456 } 00:20:46.456 } 00:20:46.456 ]' 00:20:46.456 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.456 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:46.456 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.456 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.456 15:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.456 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.456 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.456 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.715 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:01:OWEwZjUxY2U0MzhiMmI5N2UyM2U3MWM3NDgzNmNmNjkcbTBg: --dhchap-ctrl-secret DHHC-1:02:YWY4NDAyZWFjMjkwMzU5MmRmOGJlYjA0NTliODhkNDgzZmI0YzI3OWNlMGUyNzRh3n9Fdw==: 00:20:47.285 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.546 15:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.546 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.120 00:20:48.120 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.120 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.120 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.381 { 00:20:48.381 "cntlid": 141, 00:20:48.381 "qid": 0, 00:20:48.381 "state": "enabled", 00:20:48.381 "thread": "nvmf_tgt_poll_group_000", 00:20:48.381 "listen_address": { 00:20:48.381 "trtype": "TCP", 00:20:48.381 "adrfam": "IPv4", 00:20:48.381 "traddr": "10.0.0.2", 00:20:48.381 "trsvcid": "4420" 00:20:48.381 }, 00:20:48.381 "peer_address": { 00:20:48.381 "trtype": "TCP", 00:20:48.381 "adrfam": "IPv4", 00:20:48.381 "traddr": "10.0.0.1", 00:20:48.381 "trsvcid": "52532" 00:20:48.381 }, 00:20:48.381 "auth": { 00:20:48.381 "state": "completed", 00:20:48.381 "digest": "sha512", 00:20:48.381 "dhgroup": "ffdhe8192" 00:20:48.381 } 00:20:48.381 } 00:20:48.381 ]' 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.381 15:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.641 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:02:ZTM4OTQxYzAyNDM1NGNkNTkzMWQ3OWUwNDFmYmJiZDlhOWIyNmUwZTM1MmIxOTRkTHZW9A==: --dhchap-ctrl-secret DHHC-1:01:NjVlZjIzMGJjMmU2ZTdlYjlkNDViMDQ4Mzk2NDliMTUfUhZP: 00:20:49.213 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.213 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:49.213 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.213 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.474 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.475 15:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.475 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.475 15:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:50.048 00:20:50.048 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.048 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.048 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.309 { 00:20:50.309 "cntlid": 143, 00:20:50.309 "qid": 0, 00:20:50.309 "state": "enabled", 00:20:50.309 "thread": "nvmf_tgt_poll_group_000", 00:20:50.309 "listen_address": { 00:20:50.309 "trtype": "TCP", 00:20:50.309 "adrfam": "IPv4", 00:20:50.309 "traddr": "10.0.0.2", 00:20:50.309 "trsvcid": "4420" 00:20:50.309 }, 00:20:50.309 "peer_address": { 00:20:50.309 "trtype": "TCP", 00:20:50.309 "adrfam": "IPv4", 00:20:50.309 "traddr": "10.0.0.1", 00:20:50.309 "trsvcid": "52562" 00:20:50.309 }, 00:20:50.309 "auth": { 00:20:50.309 "state": "completed", 00:20:50.309 "digest": "sha512", 00:20:50.309 "dhgroup": "ffdhe8192" 00:20:50.309 } 00:20:50.309 } 00:20:50.309 ]' 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.309 
15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.309 15:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.569 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.142 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.403 15:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.975 00:20:51.975 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.975 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.975 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.236 { 00:20:52.236 "cntlid": 145, 00:20:52.236 "qid": 0, 00:20:52.236 "state": "enabled", 00:20:52.236 "thread": "nvmf_tgt_poll_group_000", 00:20:52.236 "listen_address": { 00:20:52.236 "trtype": "TCP", 00:20:52.236 "adrfam": "IPv4", 00:20:52.236 "traddr": "10.0.0.2", 00:20:52.236 "trsvcid": "4420" 00:20:52.236 }, 00:20:52.236 "peer_address": { 00:20:52.236 "trtype": "TCP", 00:20:52.236 "adrfam": "IPv4", 00:20:52.236 "traddr": "10.0.0.1", 00:20:52.236 "trsvcid": "52576" 00:20:52.236 }, 00:20:52.236 "auth": { 00:20:52.236 "state": "completed", 00:20:52.236 "digest": "sha512", 00:20:52.236 "dhgroup": "ffdhe8192" 00:20:52.236 } 00:20:52.236 } 00:20:52.236 ]' 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.236 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.497 15:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:00:MmQwZGY2NWY2YzNiY2YyZGE4NmRkNWU4ZDc2MDcyMWE5ZGRjZGEwM2ZlMWM0ODdlmgeElw==: --dhchap-ctrl-secret DHHC-1:03:OTUyMGZlZWExNzFmZjU3MDc4YWI0MTA0Yzg3ZDk2NzQzZThkZmExYzZiNDQ5NjlmNjNiNmJjNDZiZTViZWY2NnT73SM=: 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.069 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:53.070 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.070 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:53.070 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.070 15:26:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:53.070 15:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:20:53.641 request: 00:20:53.641 { 00:20:53.641 "name": "nvme0", 00:20:53.641 "trtype": "tcp", 00:20:53.641 "traddr": "10.0.0.2", 00:20:53.641 "adrfam": "ipv4", 00:20:53.641 "trsvcid": "4420", 00:20:53.641 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:53.641 "prchk_reftag": false, 00:20:53.641 "prchk_guard": false, 00:20:53.641 "hdgst": false, 00:20:53.641 "ddgst": false, 00:20:53.641 "dhchap_key": "key2", 00:20:53.641 "method": "bdev_nvme_attach_controller", 00:20:53.641 "req_id": 1 00:20:53.641 } 00:20:53.641 Got JSON-RPC error response 00:20:53.641 response: 00:20:53.641 { 00:20:53.641 "code": -5, 00:20:53.641 "message": "Input/output error" 00:20:53.641 } 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:53.642 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:54.218 request: 00:20:54.218 { 00:20:54.218 "name": "nvme0", 00:20:54.218 "trtype": "tcp", 00:20:54.218 "traddr": "10.0.0.2", 00:20:54.218 "adrfam": "ipv4", 00:20:54.218 "trsvcid": "4420", 00:20:54.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:54.218 "prchk_reftag": false, 00:20:54.218 "prchk_guard": false, 00:20:54.218 "hdgst": false, 00:20:54.218 "ddgst": false, 00:20:54.218 "dhchap_key": "key1", 00:20:54.218 "dhchap_ctrlr_key": "ckey2", 00:20:54.218 "method": "bdev_nvme_attach_controller", 00:20:54.218 "req_id": 1 00:20:54.218 } 00:20:54.218 Got JSON-RPC error response 00:20:54.218 response: 00:20:54.218 { 00:20:54.218 "code": -5, 00:20:54.218 "message": "Input/output error" 00:20:54.218 } 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key1 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.218 15:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.829 request: 00:20:54.829 { 00:20:54.829 "name": "nvme0", 00:20:54.829 "trtype": "tcp", 00:20:54.829 "traddr": "10.0.0.2", 00:20:54.829 "adrfam": "ipv4", 00:20:54.829 "trsvcid": "4420", 00:20:54.829 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:54.829 "prchk_reftag": false, 00:20:54.829 "prchk_guard": false, 00:20:54.829 "hdgst": false, 00:20:54.829 "ddgst": false, 00:20:54.829 "dhchap_key": "key1", 00:20:54.829 "dhchap_ctrlr_key": "ckey1", 00:20:54.829 "method": "bdev_nvme_attach_controller", 00:20:54.829 "req_id": 1 00:20:54.829 } 00:20:54.829 Got JSON-RPC error response 00:20:54.829 response: 00:20:54.829 { 00:20:54.829 "code": -5, 00:20:54.829 "message": "Input/output error" 00:20:54.829 } 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 698821 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 698821 ']' 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 698821 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 698821 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 698821' 00:20:54.829 killing process with pid 698821 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 698821 00:20:54.829 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 698821 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=724993 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 724993 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 724993 ']' 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.089 15:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 724993 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 724993 ']' 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
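Editor's note: the trace above exercises the harness's killprocess helper against the first target (pid 698821) -- probe the pid with kill -0, check the command name with ps so a bare sudo wrapper is never killed directly, then kill and wait to reap it -- before nvmfappstart relaunches nvmf_tgt with --wait-for-rpc and the nvmf_auth log flag. A minimal sketch of that stop/restart pattern, assuming simplified standalone forms of killprocess and the listen-wait loop (the real helpers in autotest_common.sh and nvmf/common.sh carry more checks):

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                            # already gone
    [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1    # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                               # reap; the exit code is irrelevant here
}

killprocess 698821
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
done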
00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.030 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.291 15:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.863 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.863 { 00:20:56.863 
"cntlid": 1, 00:20:56.863 "qid": 0, 00:20:56.863 "state": "enabled", 00:20:56.863 "thread": "nvmf_tgt_poll_group_000", 00:20:56.863 "listen_address": { 00:20:56.863 "trtype": "TCP", 00:20:56.863 "adrfam": "IPv4", 00:20:56.863 "traddr": "10.0.0.2", 00:20:56.863 "trsvcid": "4420" 00:20:56.863 }, 00:20:56.863 "peer_address": { 00:20:56.863 "trtype": "TCP", 00:20:56.863 "adrfam": "IPv4", 00:20:56.863 "traddr": "10.0.0.1", 00:20:56.863 "trsvcid": "59244" 00:20:56.863 }, 00:20:56.863 "auth": { 00:20:56.863 "state": "completed", 00:20:56.863 "digest": "sha512", 00:20:56.863 "dhgroup": "ffdhe8192" 00:20:56.863 } 00:20:56.863 } 00:20:56.863 ]' 00:20:56.863 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.123 15:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid 008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-secret DHHC-1:03:MGMwOWQ1NjVlOTdhMDBjYjY3OWJlYTYwMGZiNzcwZWZkMGRlYTlkMGMyOGQ0ZGZlZWJmYWU0MWFkZmQyZWU0M9YulH8=: 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --dhchap-key key3 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.065 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.325 request: 00:20:58.325 { 00:20:58.325 "name": "nvme0", 00:20:58.325 "trtype": "tcp", 00:20:58.325 "traddr": "10.0.0.2", 00:20:58.325 "adrfam": "ipv4", 00:20:58.325 "trsvcid": "4420", 00:20:58.325 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:58.325 "prchk_reftag": false, 00:20:58.325 "prchk_guard": false, 00:20:58.325 "hdgst": false, 00:20:58.325 "ddgst": false, 00:20:58.325 "dhchap_key": "key3", 00:20:58.325 "method": "bdev_nvme_attach_controller", 00:20:58.325 "req_id": 1 00:20:58.325 } 00:20:58.325 Got JSON-RPC error response 00:20:58.325 response: 00:20:58.325 { 00:20:58.325 "code": -5, 00:20:58.325 "message": "Input/output error" 00:20:58.325 } 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:58.325 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.586 15:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.586 request: 00:20:58.586 { 00:20:58.586 "name": "nvme0", 00:20:58.586 "trtype": "tcp", 00:20:58.586 "traddr": "10.0.0.2", 00:20:58.586 "adrfam": "ipv4", 00:20:58.586 "trsvcid": "4420", 00:20:58.586 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:58.586 "prchk_reftag": false, 00:20:58.586 "prchk_guard": false, 00:20:58.586 "hdgst": false, 00:20:58.586 "ddgst": false, 00:20:58.586 "dhchap_key": "key3", 00:20:58.586 "method": "bdev_nvme_attach_controller", 00:20:58.586 "req_id": 1 00:20:58.586 } 00:20:58.586 Got JSON-RPC error response 00:20:58.586 response: 00:20:58.586 { 00:20:58.586 "code": -5, 00:20:58.586 "message": "Input/output error" 00:20:58.586 } 00:20:58.586 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:58.586 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:58.586 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:58.586 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:58.587 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.848 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:58.848 request: 00:20:58.848 { 00:20:58.848 "name": "nvme0", 00:20:58.848 "trtype": "tcp", 00:20:58.848 "traddr": "10.0.0.2", 00:20:58.848 "adrfam": "ipv4", 00:20:58.848 "trsvcid": "4420", 00:20:58.848 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:58.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6", 00:20:58.848 "prchk_reftag": false, 00:20:58.848 "prchk_guard": false, 00:20:58.848 "hdgst": false, 00:20:58.848 "ddgst": false, 00:20:58.848 
"dhchap_key": "key0", 00:20:58.848 "dhchap_ctrlr_key": "key1", 00:20:58.848 "method": "bdev_nvme_attach_controller", 00:20:58.848 "req_id": 1 00:20:58.848 } 00:20:58.848 Got JSON-RPC error response 00:20:58.848 response: 00:20:58.848 { 00:20:58.848 "code": -5, 00:20:58.848 "message": "Input/output error" 00:20:58.848 } 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:59.109 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.109 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:59.370 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.370 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.370 15:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 698855 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 698855 ']' 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 698855 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 698855 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 698855' 00:20:59.631 killing process with pid 698855 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 698855 00:20:59.631 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 698855 00:20:59.892 
15:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:59.892 rmmod nvme_tcp 00:20:59.892 rmmod nvme_fabrics 00:20:59.892 rmmod nvme_keyring 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 724993 ']' 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 724993 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 724993 ']' 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 724993 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724993 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724993' 00:20:59.892 killing process with pid 724993 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 724993 00:20:59.892 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 724993 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.153 15:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.066 15:26:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:02.066 15:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1iM /tmp/spdk.key-sha256.S5X /tmp/spdk.key-sha384.Xdf /tmp/spdk.key-sha512.w86 /tmp/spdk.key-sha512.Xhr /tmp/spdk.key-sha384.fgy /tmp/spdk.key-sha256.ckV '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:02.066 00:21:02.066 real 2m22.586s 00:21:02.066 user 5m16.470s 00:21:02.066 sys 0m19.250s 00:21:02.066 15:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:02.066 15:26:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.066 ************************************ 00:21:02.066 END TEST nvmf_auth_target 00:21:02.066 ************************************ 00:21:02.066 15:26:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:02.066 15:26:11 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:02.066 15:26:11 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.066 15:26:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:02.066 15:26:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.066 15:26:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.327 ************************************ 00:21:02.327 START TEST nvmf_bdevio_no_huge 00:21:02.327 ************************************ 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:02.327 * Looking for test storage... 00:21:02.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.327 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
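Editor's note: at this point nvmf/common.sh has generated a fresh host NQN with nvme gen-hostnqn and started assembling the target's argument array; because bdevio.sh was invoked with --no-hugepages, the next append pulls in the no-hugepage options. A short sketch of how those pieces end up as the nvmf_tgt command line seen later in this log; NO_HUGE is assumed to expand to the flags visible in that command (--no-huge -s 1024), its real definition lives elsewhere in the harness:

NVME_HOSTNQN=$(nvme gen-hostnqn)                               # per-run host NQN (nqn.2014-08.org.nvmexpress:uuid:...)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")  # reused by every nvme connect in the test

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                    # shared-memory id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                                    # assumed: --no-huge -s 1024 for this run

# nvmfappstart later runs this inside the target namespace with the test's core mask:
ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78 &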
00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:02.328 15:26:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:10.471 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:10.471 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:10.471 Found net devices under 0000:31:00.0: cvl_0_0 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.471 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:10.472 Found net devices under 0000:31:00.1: cvl_0_1 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:21:10.472 00:21:10.472 --- 10.0.0.2 ping statistics --- 00:21:10.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.472 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:21:10.472 00:21:10.472 --- 10.0.0.1 ping statistics --- 00:21:10.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.472 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=730425 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 730425 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 730425 ']' 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:10.472 15:26:19 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.472 [2024-07-15 15:26:19.595750] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:10.472 [2024-07-15 15:26:19.595821] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:10.472 [2024-07-15 15:26:19.697774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.472 [2024-07-15 15:26:19.803587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
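Note on the topology the trace above builds: nvmf_tcp_init splits the two NIC ports across network namespaces so the initiator and target endpoints sit on separate interfaces. The target-side port cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace, the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, an iptables rule opens TCP/4420, and both directions are ping-tested before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence (device discovery, error handling and full binary paths omitted):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity-check both directions
    ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78   # (full build/bin path omitted)

Every NVMF_APP invocation in the rest of this run is prefixed with the same 'ip netns exec cvl_0_0_ns_spdk'.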
00:21:10.472 [2024-07-15 15:26:19.803639] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.472 [2024-07-15 15:26:19.803647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.472 [2024-07-15 15:26:19.803654] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.472 [2024-07-15 15:26:19.803660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.472 [2024-07-15 15:26:19.803834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:10.472 [2024-07-15 15:26:19.803996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:10.472 [2024-07-15 15:26:19.804328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:10.472 [2024-07-15 15:26:19.804331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 [2024-07-15 15:26:20.437746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 Malloc0 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.044 15:26:20 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:11.044 [2024-07-15 15:26:20.491647] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:11.044 { 00:21:11.044 "params": { 00:21:11.044 "name": "Nvme$subsystem", 00:21:11.044 "trtype": "$TEST_TRANSPORT", 00:21:11.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.044 "adrfam": "ipv4", 00:21:11.044 "trsvcid": "$NVMF_PORT", 00:21:11.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.044 "hdgst": ${hdgst:-false}, 00:21:11.044 "ddgst": ${ddgst:-false} 00:21:11.044 }, 00:21:11.044 "method": "bdev_nvme_attach_controller" 00:21:11.044 } 00:21:11.044 EOF 00:21:11.044 )") 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:11.044 15:26:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:11.044 "params": { 00:21:11.044 "name": "Nvme1", 00:21:11.044 "trtype": "tcp", 00:21:11.044 "traddr": "10.0.0.2", 00:21:11.044 "adrfam": "ipv4", 00:21:11.044 "trsvcid": "4420", 00:21:11.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.045 "hdgst": false, 00:21:11.045 "ddgst": false 00:21:11.045 }, 00:21:11.045 "method": "bdev_nvme_attach_controller" 00:21:11.045 }' 00:21:11.045 [2024-07-15 15:26:20.555512] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:11.045 [2024-07-15 15:26:20.555602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid730667 ] 00:21:11.045 [2024-07-15 15:26:20.628478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.305 [2024-07-15 15:26:20.726339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.305 [2024-07-15 15:26:20.726480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.305 [2024-07-15 15:26:20.726485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.566 I/O targets: 00:21:11.566 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:11.566 00:21:11.566 00:21:11.566 CUnit - A unit testing framework for C - Version 2.1-3 00:21:11.566 http://cunit.sourceforge.net/ 00:21:11.566 00:21:11.566 00:21:11.566 Suite: bdevio tests on: Nvme1n1 00:21:11.566 Test: blockdev write read block ...passed 00:21:11.566 Test: blockdev write zeroes read block ...passed 00:21:11.566 Test: blockdev write zeroes read no split ...passed 00:21:11.566 Test: blockdev write zeroes read split ...passed 00:21:11.566 Test: blockdev write zeroes read split partial ...passed 00:21:11.566 Test: blockdev reset ...[2024-07-15 15:26:21.160344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:11.566 [2024-07-15 15:26:21.160401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1736e20 (9): Bad file descriptor 00:21:11.826 [2024-07-15 15:26:21.221142] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:11.826 passed 00:21:11.826 Test: blockdev write read 8 blocks ...passed 00:21:11.826 Test: blockdev write read size > 128k ...passed 00:21:11.826 Test: blockdev write read invalid size ...passed 00:21:11.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:11.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:11.826 Test: blockdev write read max offset ...passed 00:21:11.826 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:11.826 Test: blockdev writev readv 8 blocks ...passed 00:21:11.826 Test: blockdev writev readv 30 x 1block ...passed 00:21:11.826 Test: blockdev writev readv block ...passed 00:21:12.086 Test: blockdev writev readv size > 128k ...passed 00:21:12.086 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:12.086 Test: blockdev comparev and writev ...[2024-07-15 15:26:21.487042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.487064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.487075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.487081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.487599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.487607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.487616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.487621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.488108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.488115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.488124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.488129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.488624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.488633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:12.086 [2024-07-15 15:26:21.488638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:12.086 passed 00:21:12.086 Test: blockdev nvme passthru rw ...passed 00:21:12.086 Test: blockdev nvme passthru vendor specific ...[2024-07-15 15:26:21.572754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.086 [2024-07-15 15:26:21.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.573116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.086 [2024-07-15 15:26:21.573123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.573456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.086 [2024-07-15 15:26:21.573463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:12.086 [2024-07-15 15:26:21.573800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:12.086 [2024-07-15 15:26:21.573808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:12.086 passed 00:21:12.086 Test: blockdev nvme admin passthru ...passed 00:21:12.086 Test: blockdev copy ...passed 00:21:12.086 00:21:12.086 Run Summary: Type Total Ran Passed Failed Inactive 00:21:12.086 suites 1 1 n/a 0 0 00:21:12.086 tests 23 23 23 0 0 00:21:12.086 asserts 152 152 152 0 n/a 00:21:12.086 00:21:12.086 Elapsed time = 1.281 seconds 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.348 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.348 rmmod nvme_tcp 00:21:12.348 rmmod nvme_fabrics 00:21:12.348 rmmod nvme_keyring 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 730425 ']' 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 730425 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 730425 ']' 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 730425 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:12.609 15:26:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 730425 00:21:12.609 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:12.609 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:12.609 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 730425' 00:21:12.609 killing process with pid 730425 00:21:12.609 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 730425 00:21:12.609 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 730425 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.870 15:26:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.410 15:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:15.410 00:21:15.410 real 0m12.765s 00:21:15.410 user 0m14.579s 00:21:15.410 sys 0m6.730s 00:21:15.410 15:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:15.410 15:26:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:15.410 ************************************ 00:21:15.410 END TEST nvmf_bdevio_no_huge 00:21:15.410 ************************************ 00:21:15.410 15:26:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:15.410 15:26:24 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:15.410 15:26:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:15.410 15:26:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.410 15:26:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.410 ************************************ 00:21:15.410 START TEST nvmf_tls 00:21:15.410 ************************************ 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:15.410 * Looking for test storage... 
00:21:15.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.410 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.411 15:26:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.606 
15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.606 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:23.607 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:23.607 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:23.607 Found net devices under 0000:31:00.0: cvl_0_0 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:23.607 Found net devices under 0000:31:00.1: cvl_0_1 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:21:23.607 00:21:23.607 --- 10.0.0.2 ping statistics --- 00:21:23.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.607 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:23.607 00:21:23.607 --- 10.0.0.1 ping statistics --- 00:21:23.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.607 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=735342 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 735342 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 735342 ']' 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.607 15:26:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.607 [2024-07-15 15:26:32.605079] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:21:23.607 [2024-07-15 15:26:32.605144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.607 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.607 [2024-07-15 15:26:32.683700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.607 [2024-07-15 15:26:32.757644] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.607 [2024-07-15 15:26:32.757682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
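The device-discovery half of the trace above (gather_supported_nvmf_pci_devs) narrows the PCI list to the supported NIC device IDs and then resolves each function to its kernel net device straight from sysfs, keeping only interfaces that are up; that is where the cvl_0_0/cvl_0_1 pair comes from. A rough sketch of the lookup, assuming an operstate read behind the '[[ up == up ]]' checks (an inference from the trace, not copied from nvmf/common.sh):

    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            dev=${path##*/}                                   # e.g. cvl_0_0, cvl_0_1
            [[ $(cat "$path/operstate") == up ]] && echo "Found net devices under $pci: $dev"
        done
    done

The target is then started with --wait-for-rpc, which matters for this TLS test: the ssl socket implementation options that follow have to be applied over RPC before framework_start_init brings the subsystems up.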
00:21:23.607 [2024-07-15 15:26:32.757689] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.607 [2024-07-15 15:26:32.757696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.607 [2024-07-15 15:26:32.757702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.607 [2024-07-15 15:26:32.757726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:23.867 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:24.128 true 00:21:24.128 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:24.128 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:24.128 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:24.128 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:24.128 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:24.389 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:24.389 15:26:33 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:24.650 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:24.650 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:24.650 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:24.650 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:24.650 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:24.910 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:25.171 15:26:34 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:25.171 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:25.431 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:25.431 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:25.431 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:25.431 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:25.431 15:26:34 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:25.691 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ueqrLfRTRa 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ahcPf5Me31 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ueqrLfRTRa 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ahcPf5Me31 00:21:25.692 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:25.952 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:25.952 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ueqrLfRTRa 00:21:25.952 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ueqrLfRTRa 00:21:25.952 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:26.212 [2024-07-15 15:26:35.705479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.212 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.472 15:26:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.472 [2024-07-15 15:26:36.014248] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.472 [2024-07-15 15:26:36.014465] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.472 15:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:26.733 malloc0 00:21:26.733 15:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:26.733 15:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ueqrLfRTRa 00:21:26.993 [2024-07-15 15:26:36.478250] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:26.993 15:26:36 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ueqrLfRTRa 00:21:26.993 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.993 Initializing NVMe Controllers 00:21:36.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:36.993 Initialization complete. Launching workers. 
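The PSK handling traced above is worth unpacking: format_interchange_psk wraps the raw configured key in the NVMe/TCP PSK interchange layout, base64-encoding the key bytes plus a 4-byte CRC32 and prefixing 'NVMeTLSkey-1' and a two-digit indicator taken from the digest argument, which is how the NVMeTLSkey-1:01:...: strings above are produced. A sketch of that step, mirroring the python helper the trace invokes (the zlib CRC32 and little-endian byte order are assumptions inferred from the observed output, not quoted from nvmf/common.sh):

    format_interchange_psk() {   # args: <key string> <digest>
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$1" "$2"
    }
    key_path=$(mktemp); key_2_path=$(mktemp)
    echo -n "$(format_interchange_psk 00112233445566778899aabbccddeeff 1)" > "$key_path"
    echo -n "$(format_interchange_psk ffeeddccbbaa99887766554433221100 1)" > "$key_2_path"
    chmod 0600 "$key_path" "$key_2_path"

Only the first key file is registered on the target (nvmf_subsystem_add_host ... --psk on the trace line above); the second key is kept for the negative test further down.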
00:21:36.993 ======================================================== 00:21:36.993 Latency(us) 00:21:36.993 Device Information : IOPS MiB/s Average min max 00:21:36.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13459.65 52.58 4755.51 1030.70 7395.63 00:21:36.993 ======================================================== 00:21:36.993 Total : 13459.65 52.58 4755.51 1030.70 7395.63 00:21:36.993 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ueqrLfRTRa 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ueqrLfRTRa' 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=738124 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 738124 /var/tmp/bdevperf.sock 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 738124 ']' 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.993 15:26:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.254 [2024-07-15 15:26:46.652579] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:37.254 [2024-07-15 15:26:46.652634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738124 ] 00:21:37.254 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.254 [2024-07-15 15:26:46.704596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.254 [2024-07-15 15:26:46.756973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.826 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.826 15:26:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:37.827 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ueqrLfRTRa 00:21:38.087 [2024-07-15 15:26:47.537709] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.087 [2024-07-15 15:26:47.537760] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:38.087 TLSTESTn1 00:21:38.087 15:26:47 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:38.347 Running I/O for 10 seconds... 00:21:48.349 00:21:48.349 Latency(us) 00:21:48.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.349 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:48.349 Verification LBA range: start 0x0 length 0x2000 00:21:48.349 TLSTESTn1 : 10.02 5267.26 20.58 0.00 0.00 24256.53 5761.71 62914.56 00:21:48.349 =================================================================================================================== 00:21:48.349 Total : 5267.26 20.58 0.00 0.00 24256.53 5761.71 62914.56 00:21:48.349 0 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 738124 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 738124 ']' 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 738124 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 738124 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 738124' 00:21:48.349 killing process with pid 738124 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 738124 00:21:48.349 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.349 00:21:48.349 Latency(us) 00:21:48.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:21:48.349 =================================================================================================================== 00:21:48.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.349 [2024-07-15 15:26:57.838786] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 738124 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ahcPf5Me31 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ahcPf5Me31 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ahcPf5Me31 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ahcPf5Me31' 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=740407 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 740407 /var/tmp/bdevperf.sock 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 740407 ']' 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.349 15:26:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.609 [2024-07-15 15:26:58.014723] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:48.609 [2024-07-15 15:26:58.014777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740407 ] 00:21:48.609 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.609 [2024-07-15 15:26:58.068818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.609 [2024-07-15 15:26:58.118730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.179 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.179 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:49.179 15:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ahcPf5Me31 00:21:49.440 [2024-07-15 15:26:58.911760] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.440 [2024-07-15 15:26:58.911825] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.440 [2024-07-15 15:26:58.923497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:49.440 [2024-07-15 15:26:58.923766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c1000 (107): Transport endpoint is not connected 00:21:49.440 [2024-07-15 15:26:58.924760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c1000 (9): Bad file descriptor 00:21:49.440 [2024-07-15 15:26:58.925762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.440 [2024-07-15 15:26:58.925769] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:49.440 [2024-07-15 15:26:58.925776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:49.440 request: 00:21:49.440 { 00:21:49.440 "name": "TLSTEST", 00:21:49.440 "trtype": "tcp", 00:21:49.440 "traddr": "10.0.0.2", 00:21:49.440 "adrfam": "ipv4", 00:21:49.440 "trsvcid": "4420", 00:21:49.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.440 "prchk_reftag": false, 00:21:49.440 "prchk_guard": false, 00:21:49.440 "hdgst": false, 00:21:49.440 "ddgst": false, 00:21:49.440 "psk": "/tmp/tmp.ahcPf5Me31", 00:21:49.440 "method": "bdev_nvme_attach_controller", 00:21:49.440 "req_id": 1 00:21:49.440 } 00:21:49.440 Got JSON-RPC error response 00:21:49.440 response: 00:21:49.440 { 00:21:49.440 "code": -5, 00:21:49.440 "message": "Input/output error" 00:21:49.440 } 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 740407 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 740407 ']' 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 740407 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.440 15:26:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740407 00:21:49.440 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:49.440 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:49.440 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740407' 00:21:49.440 killing process with pid 740407 00:21:49.440 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 740407 00:21:49.440 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.440 00:21:49.440 Latency(us) 00:21:49.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.440 =================================================================================================================== 00:21:49.440 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.440 [2024-07-15 15:26:59.013520] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:49.440 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 740407 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ueqrLfRTRa 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ueqrLfRTRa 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ueqrLfRTRa 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ueqrLfRTRa' 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=740614 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 740614 /var/tmp/bdevperf.sock 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 740614 ']' 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.701 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.701 [2024-07-15 15:26:59.172949] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:49.701 [2024-07-15 15:26:59.173004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740614 ] 00:21:49.701 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.701 [2024-07-15 15:26:59.227051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.701 [2024-07-15 15:26:59.279537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.643 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.643 15:26:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:50.643 15:26:59 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ueqrLfRTRa 00:21:50.643 [2024-07-15 15:27:00.076409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:50.643 [2024-07-15 15:27:00.076477] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:50.643 [2024-07-15 15:27:00.086292] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:50.643 [2024-07-15 15:27:00.086316] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:50.643 [2024-07-15 15:27:00.086343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:50.643 [2024-07-15 15:27:00.087456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1037000 (107): Transport endpoint is not connected 00:21:50.643 [2024-07-15 15:27:00.088451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1037000 (9): Bad file descriptor 00:21:50.643 [2024-07-15 15:27:00.089452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.643 [2024-07-15 15:27:00.089459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:50.643 [2024-07-15 15:27:00.089466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:50.643 request: 00:21:50.643 { 00:21:50.643 "name": "TLSTEST", 00:21:50.643 "trtype": "tcp", 00:21:50.643 "traddr": "10.0.0.2", 00:21:50.643 "adrfam": "ipv4", 00:21:50.643 "trsvcid": "4420", 00:21:50.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.643 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:50.643 "prchk_reftag": false, 00:21:50.643 "prchk_guard": false, 00:21:50.643 "hdgst": false, 00:21:50.643 "ddgst": false, 00:21:50.643 "psk": "/tmp/tmp.ueqrLfRTRa", 00:21:50.643 "method": "bdev_nvme_attach_controller", 00:21:50.643 "req_id": 1 00:21:50.643 } 00:21:50.643 Got JSON-RPC error response 00:21:50.643 response: 00:21:50.643 { 00:21:50.643 "code": -5, 00:21:50.643 "message": "Input/output error" 00:21:50.643 } 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 740614 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 740614 ']' 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 740614 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740614 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740614' 00:21:50.643 killing process with pid 740614 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 740614 00:21:50.643 Received shutdown signal, test time was about 10.000000 seconds 00:21:50.643 00:21:50.643 Latency(us) 00:21:50.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.643 =================================================================================================================== 00:21:50.643 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:50.643 [2024-07-15 15:27:00.175752] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:50.643 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 740614 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ueqrLfRTRa 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ueqrLfRTRa 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ueqrLfRTRa 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ueqrLfRTRa' 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=740789 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 740789 /var/tmp/bdevperf.sock 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 740789 ']' 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.904 15:27:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.904 [2024-07-15 15:27:00.334062] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:50.904 [2024-07-15 15:27:00.334120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740789 ] 00:21:50.904 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.904 [2024-07-15 15:27:00.386364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.904 [2024-07-15 15:27:00.438550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.475 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.475 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:51.475 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ueqrLfRTRa 00:21:51.736 [2024-07-15 15:27:01.235547] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.736 [2024-07-15 15:27:01.235601] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.736 [2024-07-15 15:27:01.239670] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:51.736 [2024-07-15 15:27:01.239692] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:51.736 [2024-07-15 15:27:01.239716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:51.736 [2024-07-15 15:27:01.240356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9000 (107): Transport endpoint is not connected 00:21:51.736 [2024-07-15 15:27:01.241350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9000 (9): Bad file descriptor 00:21:51.736 [2024-07-15 15:27:01.242352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:51.736 [2024-07-15 15:27:01.242358] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:51.736 [2024-07-15 15:27:01.242365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:51.736 request: 00:21:51.736 { 00:21:51.736 "name": "TLSTEST", 00:21:51.736 "trtype": "tcp", 00:21:51.736 "traddr": "10.0.0.2", 00:21:51.736 "adrfam": "ipv4", 00:21:51.736 "trsvcid": "4420", 00:21:51.736 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.736 "prchk_reftag": false, 00:21:51.736 "prchk_guard": false, 00:21:51.736 "hdgst": false, 00:21:51.736 "ddgst": false, 00:21:51.736 "psk": "/tmp/tmp.ueqrLfRTRa", 00:21:51.736 "method": "bdev_nvme_attach_controller", 00:21:51.736 "req_id": 1 00:21:51.736 } 00:21:51.736 Got JSON-RPC error response 00:21:51.736 response: 00:21:51.736 { 00:21:51.736 "code": -5, 00:21:51.736 "message": "Input/output error" 00:21:51.736 } 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 740789 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 740789 ']' 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 740789 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 740789 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 740789' 00:21:51.736 killing process with pid 740789 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 740789 00:21:51.736 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.736 00:21:51.736 Latency(us) 00:21:51.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.736 =================================================================================================================== 00:21:51.736 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:51.736 [2024-07-15 15:27:01.311251] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:51.736 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 740789 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=741190 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 741190 /var/tmp/bdevperf.sock 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741190 ']' 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.997 15:27:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.997 [2024-07-15 15:27:01.477987] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:51.997 [2024-07-15 15:27:01.478061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741190 ] 00:21:51.997 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.997 [2024-07-15 15:27:01.530680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.997 [2024-07-15 15:27:01.582231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.937 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.937 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.937 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:52.937 [2024-07-15 15:27:02.392022] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:52.938 [2024-07-15 15:27:02.393421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820760 (9): Bad file descriptor 00:21:52.938 [2024-07-15 15:27:02.394421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:52.938 [2024-07-15 15:27:02.394428] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:52.938 [2024-07-15 15:27:02.394435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:52.938 request: 00:21:52.938 { 00:21:52.938 "name": "TLSTEST", 00:21:52.938 "trtype": "tcp", 00:21:52.938 "traddr": "10.0.0.2", 00:21:52.938 "adrfam": "ipv4", 00:21:52.938 "trsvcid": "4420", 00:21:52.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.938 "prchk_reftag": false, 00:21:52.938 "prchk_guard": false, 00:21:52.938 "hdgst": false, 00:21:52.938 "ddgst": false, 00:21:52.938 "method": "bdev_nvme_attach_controller", 00:21:52.938 "req_id": 1 00:21:52.938 } 00:21:52.938 Got JSON-RPC error response 00:21:52.938 response: 00:21:52.938 { 00:21:52.938 "code": -5, 00:21:52.938 "message": "Input/output error" 00:21:52.938 } 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 741190 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741190 ']' 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741190 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741190 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741190' 00:21:52.938 killing process with pid 741190 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741190 00:21:52.938 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.938 00:21:52.938 Latency(us) 00:21:52.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.938 =================================================================================================================== 00:21:52.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:52.938 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741190 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 735342 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 735342 ']' 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 735342 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735342 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735342' 00:21:53.198 killing 
process with pid 735342 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 735342 00:21:53.198 [2024-07-15 15:27:02.625386] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 735342 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.EfKRaa8e48 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:53.198 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.EfKRaa8e48 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=741539 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 741539 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741539 ']' 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.459 15:27:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.459 [2024-07-15 15:27:02.878424] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
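The key_long value produced by format_interchange_psk above ("NVMeTLSkey-1:02:MDAx...wWXNJw==:") follows the NVMe/TCP PSK interchange layout: a version prefix, a two-digit hash field, and a base64 blob of the configured key bytes with a CRC-32 appended. A minimal Python sketch that reproduces the string captured in this log, assuming the key argument is used as its ASCII characters and the CRC-32 is packed little-endian (both assumptions, not taken from the test script source):

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # Assumption: the key string is treated as raw ASCII bytes and a
    # little-endian CRC-32 of those bytes is appended before base64 encoding.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, byteorder="little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(data + crc).decode())

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
# Given the assumptions above, this should match the key_long value in the log:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The 48-character key is base64-encoded unchanged (the leading MDAxMTIy... decodes back to 00112233...), so only the trailing wWXNJw== carries the checksum; the 0600 chmod that follows matters because the target later rejects the file outright when its permissions are loosened.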
00:21:53.459 [2024-07-15 15:27:02.878481] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.459 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.459 [2024-07-15 15:27:02.947617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.459 [2024-07-15 15:27:03.012106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.459 [2024-07-15 15:27:03.012143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.459 [2024-07-15 15:27:03.012153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.459 [2024-07-15 15:27:03.012160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.459 [2024-07-15 15:27:03.012165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.459 [2024-07-15 15:27:03.012184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EfKRaa8e48 00:21:54.087 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:54.348 [2024-07-15 15:27:03.818665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.348 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:54.607 15:27:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:54.607 [2024-07-15 15:27:04.111386] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:54.607 [2024-07-15 15:27:04.111590] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.607 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:54.868 malloc0 00:21:54.868 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:54.868 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.EfKRaa8e48 00:21:55.128 [2024-07-15 15:27:04.559293] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EfKRaa8e48 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EfKRaa8e48' 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=741905 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 741905 /var/tmp/bdevperf.sock 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 741905 ']' 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.128 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.128 [2024-07-15 15:27:04.607418] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:21:55.128 [2024-07-15 15:27:04.607469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid741905 ] 00:21:55.128 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.128 [2024-07-15 15:27:04.661058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.128 [2024-07-15 15:27:04.713121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.389 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.389 15:27:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.389 15:27:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48 00:21:55.389 [2024-07-15 15:27:04.932638] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.389 [2024-07-15 15:27:04.932701] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:55.650 TLSTESTn1 00:21:55.650 15:27:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:55.650 Running I/O for 10 seconds... 00:22:05.642 00:22:05.642 Latency(us) 00:22:05.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.642 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.642 Verification LBA range: start 0x0 length 0x2000 00:22:05.642 TLSTESTn1 : 10.03 3733.91 14.59 0.00 0.00 34220.59 4860.59 58545.49 00:22:05.642 =================================================================================================================== 00:22:05.642 Total : 3733.91 14.59 0.00 0.00 34220.59 4860.59 58545.49 00:22:05.642 0 00:22:05.642 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.642 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 741905 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741905 ']' 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741905 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741905 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741905' 00:22:05.643 killing process with pid 741905 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741905 00:22:05.643 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.643 00:22:05.643 Latency(us) 00:22:05.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:22:05.643 =================================================================================================================== 00:22:05.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.643 [2024-07-15 15:27:15.245723] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:05.643 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741905 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.EfKRaa8e48 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EfKRaa8e48 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EfKRaa8e48 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EfKRaa8e48 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EfKRaa8e48' 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=744399 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 744399 /var/tmp/bdevperf.sock 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 744399 ']' 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.903 15:27:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.903 [2024-07-15 15:27:15.422365] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:05.903 [2024-07-15 15:27:15.422418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid744399 ] 00:22:05.903 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.903 [2024-07-15 15:27:15.478022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.163 [2024-07-15 15:27:15.528465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.734 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.734 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:06.734 15:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48 00:22:06.734 [2024-07-15 15:27:16.325387] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:06.734 [2024-07-15 15:27:16.325432] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:06.734 [2024-07-15 15:27:16.325438] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.EfKRaa8e48 00:22:06.734 request: 00:22:06.734 { 00:22:06.734 "name": "TLSTEST", 00:22:06.734 "trtype": "tcp", 00:22:06.734 "traddr": "10.0.0.2", 00:22:06.734 "adrfam": "ipv4", 00:22:06.734 "trsvcid": "4420", 00:22:06.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:06.734 "prchk_reftag": false, 00:22:06.734 "prchk_guard": false, 00:22:06.734 "hdgst": false, 00:22:06.734 "ddgst": false, 00:22:06.734 "psk": "/tmp/tmp.EfKRaa8e48", 00:22:06.734 "method": "bdev_nvme_attach_controller", 00:22:06.734 "req_id": 1 00:22:06.734 } 00:22:06.734 Got JSON-RPC error response 00:22:06.734 response: 00:22:06.734 { 00:22:06.734 "code": -1, 00:22:06.734 "message": "Operation not permitted" 00:22:06.734 } 00:22:06.993 15:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 744399 00:22:06.993 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 744399 ']' 00:22:06.993 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 744399 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744399 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744399' 00:22:06.994 killing process with pid 744399 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 744399 00:22:06.994 Received shutdown signal, test time was about 10.000000 seconds 00:22:06.994 00:22:06.994 Latency(us) 00:22:06.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.994 =================================================================================================================== 
00:22:06.994 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 744399 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 741539 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 741539 ']' 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 741539 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 741539 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 741539' 00:22:06.994 killing process with pid 741539 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 741539 00:22:06.994 [2024-07-15 15:27:16.570812] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:06.994 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 741539 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=744735 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 744735 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 744735 ']' 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.253 15:27:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.253 [2024-07-15 15:27:16.772860] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:07.253 [2024-07-15 15:27:16.772922] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.253 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.253 [2024-07-15 15:27:16.843176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.512 [2024-07-15 15:27:16.906787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.512 [2024-07-15 15:27:16.906826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.512 [2024-07-15 15:27:16.906833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.512 [2024-07-15 15:27:16.906839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.512 [2024-07-15 15:27:16.906845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.512 [2024-07-15 15:27:16.906870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EfKRaa8e48 00:22:08.081 15:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:08.341 [2024-07-15 15:27:17.709248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.341 15:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:08.341 15:27:17 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:08.600 [2024-07-15 15:27:18.018008] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:22:08.600 [2024-07-15 15:27:18.018212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.600 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:08.600 malloc0 00:22:08.600 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.860 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48 00:22:08.860 [2024-07-15 15:27:18.453823] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:08.860 [2024-07-15 15:27:18.453847] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:08.860 [2024-07-15 15:27:18.453874] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:08.860 request: 00:22:08.860 { 00:22:08.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.860 "host": "nqn.2016-06.io.spdk:host1", 00:22:08.860 "psk": "/tmp/tmp.EfKRaa8e48", 00:22:08.860 "method": "nvmf_subsystem_add_host", 00:22:08.860 "req_id": 1 00:22:08.860 } 00:22:08.860 Got JSON-RPC error response 00:22:08.861 response: 00:22:08.861 { 00:22:08.861 "code": -32603, 00:22:08.861 "message": "Internal error" 00:22:08.861 } 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 744735 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 744735 ']' 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 744735 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.861 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 744735 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 744735' 00:22:09.121 killing process with pid 744735 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 744735 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 744735 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.EfKRaa8e48 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=745120 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 745120 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745120 ']' 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.121 15:27:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.121 [2024-07-15 15:27:18.733359] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:09.121 [2024-07-15 15:27:18.733409] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.380 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.380 [2024-07-15 15:27:18.802148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.380 [2024-07-15 15:27:18.864609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.380 [2024-07-15 15:27:18.864646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.380 [2024-07-15 15:27:18.864653] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.380 [2024-07-15 15:27:18.864659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.380 [2024-07-15 15:27:18.864665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
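The two failures earlier in this run -- the -1 "Operation not permitted" from bdev_nvme_attach_controller and the -32603 "Internal error" from nvmf_subsystem_add_host -- both stem from the same *ERROR*: Incorrect permissions for PSK file: the loader appears to reject a key file whose mode allows group or world access, which is why the script runs chmod 0600 on /tmp/tmp.EfKRaa8e48 before retrying. A condensed sketch of the target-side sequence the trace replays next (commands copied from the log, with the long workspace prefix shortened to scripts/rpc.py):

  chmod 0600 /tmp/tmp.EfKRaa8e48        # tighten the PSK file mode first
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48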
00:22:09.380 [2024-07-15 15:27:18.864690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EfKRaa8e48 00:22:09.950 15:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:10.211 [2024-07-15 15:27:19.703198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.211 15:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:10.471 15:27:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:10.471 [2024-07-15 15:27:19.995915] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.471 [2024-07-15 15:27:19.996115] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.471 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:10.731 malloc0 00:22:10.731 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:10.731 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48 00:22:10.991 [2024-07-15 15:27:20.451748] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=745477 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 745477 /var/tmp/bdevperf.sock 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745477 ']' 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.991 15:27:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.991 [2024-07-15 15:27:20.513765] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:10.992 [2024-07-15 15:27:20.513818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745477 ] 00:22:10.992 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.992 [2024-07-15 15:27:20.567908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.251 [2024-07-15 15:27:20.619947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.821 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.821 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:11.821 15:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48 00:22:11.821 [2024-07-15 15:27:21.404731] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.821 [2024-07-15 15:27:21.404794] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.081 TLSTESTn1 00:22:12.081 15:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:12.342 15:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:12.342 "subsystems": [ 00:22:12.342 { 00:22:12.342 "subsystem": "keyring", 00:22:12.342 "config": [] 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "subsystem": "iobuf", 00:22:12.342 "config": [ 00:22:12.342 { 00:22:12.342 "method": "iobuf_set_options", 00:22:12.342 "params": { 00:22:12.342 "small_pool_count": 8192, 00:22:12.342 "large_pool_count": 1024, 00:22:12.342 "small_bufsize": 8192, 00:22:12.342 "large_bufsize": 135168 00:22:12.342 } 00:22:12.342 } 00:22:12.342 ] 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "subsystem": "sock", 00:22:12.342 "config": [ 00:22:12.342 { 00:22:12.342 "method": "sock_set_default_impl", 00:22:12.342 "params": { 00:22:12.342 "impl_name": "posix" 00:22:12.342 } 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "method": "sock_impl_set_options", 00:22:12.342 "params": { 00:22:12.342 "impl_name": "ssl", 00:22:12.342 "recv_buf_size": 4096, 00:22:12.342 "send_buf_size": 4096, 00:22:12.342 "enable_recv_pipe": true, 00:22:12.342 "enable_quickack": false, 00:22:12.342 "enable_placement_id": 0, 00:22:12.342 "enable_zerocopy_send_server": true, 00:22:12.342 "enable_zerocopy_send_client": false, 00:22:12.342 "zerocopy_threshold": 0, 00:22:12.342 "tls_version": 0, 00:22:12.342 "enable_ktls": false 00:22:12.342 } 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "method": "sock_impl_set_options", 00:22:12.342 "params": { 00:22:12.342 "impl_name": "posix", 00:22:12.342 "recv_buf_size": 2097152, 00:22:12.342 
"send_buf_size": 2097152, 00:22:12.342 "enable_recv_pipe": true, 00:22:12.342 "enable_quickack": false, 00:22:12.342 "enable_placement_id": 0, 00:22:12.342 "enable_zerocopy_send_server": true, 00:22:12.342 "enable_zerocopy_send_client": false, 00:22:12.342 "zerocopy_threshold": 0, 00:22:12.342 "tls_version": 0, 00:22:12.342 "enable_ktls": false 00:22:12.342 } 00:22:12.342 } 00:22:12.342 ] 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "subsystem": "vmd", 00:22:12.342 "config": [] 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "subsystem": "accel", 00:22:12.342 "config": [ 00:22:12.342 { 00:22:12.342 "method": "accel_set_options", 00:22:12.342 "params": { 00:22:12.342 "small_cache_size": 128, 00:22:12.342 "large_cache_size": 16, 00:22:12.342 "task_count": 2048, 00:22:12.342 "sequence_count": 2048, 00:22:12.342 "buf_count": 2048 00:22:12.342 } 00:22:12.342 } 00:22:12.342 ] 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "subsystem": "bdev", 00:22:12.342 "config": [ 00:22:12.342 { 00:22:12.342 "method": "bdev_set_options", 00:22:12.342 "params": { 00:22:12.342 "bdev_io_pool_size": 65535, 00:22:12.342 "bdev_io_cache_size": 256, 00:22:12.342 "bdev_auto_examine": true, 00:22:12.342 "iobuf_small_cache_size": 128, 00:22:12.342 "iobuf_large_cache_size": 16 00:22:12.342 } 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "method": "bdev_raid_set_options", 00:22:12.342 "params": { 00:22:12.342 "process_window_size_kb": 1024 00:22:12.342 } 00:22:12.342 }, 00:22:12.342 { 00:22:12.342 "method": "bdev_iscsi_set_options", 00:22:12.342 "params": { 00:22:12.342 "timeout_sec": 30 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "bdev_nvme_set_options", 00:22:12.343 "params": { 00:22:12.343 "action_on_timeout": "none", 00:22:12.343 "timeout_us": 0, 00:22:12.343 "timeout_admin_us": 0, 00:22:12.343 "keep_alive_timeout_ms": 10000, 00:22:12.343 "arbitration_burst": 0, 00:22:12.343 "low_priority_weight": 0, 00:22:12.343 "medium_priority_weight": 0, 00:22:12.343 "high_priority_weight": 0, 00:22:12.343 "nvme_adminq_poll_period_us": 10000, 00:22:12.343 "nvme_ioq_poll_period_us": 0, 00:22:12.343 "io_queue_requests": 0, 00:22:12.343 "delay_cmd_submit": true, 00:22:12.343 "transport_retry_count": 4, 00:22:12.343 "bdev_retry_count": 3, 00:22:12.343 "transport_ack_timeout": 0, 00:22:12.343 "ctrlr_loss_timeout_sec": 0, 00:22:12.343 "reconnect_delay_sec": 0, 00:22:12.343 "fast_io_fail_timeout_sec": 0, 00:22:12.343 "disable_auto_failback": false, 00:22:12.343 "generate_uuids": false, 00:22:12.343 "transport_tos": 0, 00:22:12.343 "nvme_error_stat": false, 00:22:12.343 "rdma_srq_size": 0, 00:22:12.343 "io_path_stat": false, 00:22:12.343 "allow_accel_sequence": false, 00:22:12.343 "rdma_max_cq_size": 0, 00:22:12.343 "rdma_cm_event_timeout_ms": 0, 00:22:12.343 "dhchap_digests": [ 00:22:12.343 "sha256", 00:22:12.343 "sha384", 00:22:12.343 "sha512" 00:22:12.343 ], 00:22:12.343 "dhchap_dhgroups": [ 00:22:12.343 "null", 00:22:12.343 "ffdhe2048", 00:22:12.343 "ffdhe3072", 00:22:12.343 "ffdhe4096", 00:22:12.343 "ffdhe6144", 00:22:12.343 "ffdhe8192" 00:22:12.343 ] 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "bdev_nvme_set_hotplug", 00:22:12.343 "params": { 00:22:12.343 "period_us": 100000, 00:22:12.343 "enable": false 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "bdev_malloc_create", 00:22:12.343 "params": { 00:22:12.343 "name": "malloc0", 00:22:12.343 "num_blocks": 8192, 00:22:12.343 "block_size": 4096, 00:22:12.343 "physical_block_size": 4096, 00:22:12.343 "uuid": 
"ec8cf505-10e7-4e67-8dd1-2fffa1ed8cb9", 00:22:12.343 "optimal_io_boundary": 0 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "bdev_wait_for_examine" 00:22:12.343 } 00:22:12.343 ] 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "subsystem": "nbd", 00:22:12.343 "config": [] 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "subsystem": "scheduler", 00:22:12.343 "config": [ 00:22:12.343 { 00:22:12.343 "method": "framework_set_scheduler", 00:22:12.343 "params": { 00:22:12.343 "name": "static" 00:22:12.343 } 00:22:12.343 } 00:22:12.343 ] 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "subsystem": "nvmf", 00:22:12.343 "config": [ 00:22:12.343 { 00:22:12.343 "method": "nvmf_set_config", 00:22:12.343 "params": { 00:22:12.343 "discovery_filter": "match_any", 00:22:12.343 "admin_cmd_passthru": { 00:22:12.343 "identify_ctrlr": false 00:22:12.343 } 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_set_max_subsystems", 00:22:12.343 "params": { 00:22:12.343 "max_subsystems": 1024 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_set_crdt", 00:22:12.343 "params": { 00:22:12.343 "crdt1": 0, 00:22:12.343 "crdt2": 0, 00:22:12.343 "crdt3": 0 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_create_transport", 00:22:12.343 "params": { 00:22:12.343 "trtype": "TCP", 00:22:12.343 "max_queue_depth": 128, 00:22:12.343 "max_io_qpairs_per_ctrlr": 127, 00:22:12.343 "in_capsule_data_size": 4096, 00:22:12.343 "max_io_size": 131072, 00:22:12.343 "io_unit_size": 131072, 00:22:12.343 "max_aq_depth": 128, 00:22:12.343 "num_shared_buffers": 511, 00:22:12.343 "buf_cache_size": 4294967295, 00:22:12.343 "dif_insert_or_strip": false, 00:22:12.343 "zcopy": false, 00:22:12.343 "c2h_success": false, 00:22:12.343 "sock_priority": 0, 00:22:12.343 "abort_timeout_sec": 1, 00:22:12.343 "ack_timeout": 0, 00:22:12.343 "data_wr_pool_size": 0 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_create_subsystem", 00:22:12.343 "params": { 00:22:12.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.343 "allow_any_host": false, 00:22:12.343 "serial_number": "SPDK00000000000001", 00:22:12.343 "model_number": "SPDK bdev Controller", 00:22:12.343 "max_namespaces": 10, 00:22:12.343 "min_cntlid": 1, 00:22:12.343 "max_cntlid": 65519, 00:22:12.343 "ana_reporting": false 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_subsystem_add_host", 00:22:12.343 "params": { 00:22:12.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.343 "host": "nqn.2016-06.io.spdk:host1", 00:22:12.343 "psk": "/tmp/tmp.EfKRaa8e48" 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_subsystem_add_ns", 00:22:12.343 "params": { 00:22:12.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.343 "namespace": { 00:22:12.343 "nsid": 1, 00:22:12.343 "bdev_name": "malloc0", 00:22:12.343 "nguid": "EC8CF50510E74E678DD12FFFA1ED8CB9", 00:22:12.343 "uuid": "ec8cf505-10e7-4e67-8dd1-2fffa1ed8cb9", 00:22:12.343 "no_auto_visible": false 00:22:12.343 } 00:22:12.343 } 00:22:12.343 }, 00:22:12.343 { 00:22:12.343 "method": "nvmf_subsystem_add_listener", 00:22:12.343 "params": { 00:22:12.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.343 "listen_address": { 00:22:12.343 "trtype": "TCP", 00:22:12.343 "adrfam": "IPv4", 00:22:12.343 "traddr": "10.0.0.2", 00:22:12.343 "trsvcid": "4420" 00:22:12.343 }, 00:22:12.343 "secure_channel": true 00:22:12.343 } 00:22:12.343 } 00:22:12.343 ] 00:22:12.343 } 00:22:12.343 ] 00:22:12.343 }' 00:22:12.343 15:27:21 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:12.604 15:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:12.604 "subsystems": [ 00:22:12.604 { 00:22:12.604 "subsystem": "keyring", 00:22:12.604 "config": [] 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "subsystem": "iobuf", 00:22:12.604 "config": [ 00:22:12.604 { 00:22:12.604 "method": "iobuf_set_options", 00:22:12.604 "params": { 00:22:12.604 "small_pool_count": 8192, 00:22:12.604 "large_pool_count": 1024, 00:22:12.604 "small_bufsize": 8192, 00:22:12.604 "large_bufsize": 135168 00:22:12.604 } 00:22:12.604 } 00:22:12.604 ] 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "subsystem": "sock", 00:22:12.604 "config": [ 00:22:12.604 { 00:22:12.604 "method": "sock_set_default_impl", 00:22:12.604 "params": { 00:22:12.604 "impl_name": "posix" 00:22:12.604 } 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "method": "sock_impl_set_options", 00:22:12.604 "params": { 00:22:12.604 "impl_name": "ssl", 00:22:12.604 "recv_buf_size": 4096, 00:22:12.604 "send_buf_size": 4096, 00:22:12.604 "enable_recv_pipe": true, 00:22:12.604 "enable_quickack": false, 00:22:12.604 "enable_placement_id": 0, 00:22:12.604 "enable_zerocopy_send_server": true, 00:22:12.604 "enable_zerocopy_send_client": false, 00:22:12.604 "zerocopy_threshold": 0, 00:22:12.604 "tls_version": 0, 00:22:12.604 "enable_ktls": false 00:22:12.604 } 00:22:12.604 }, 00:22:12.604 { 00:22:12.604 "method": "sock_impl_set_options", 00:22:12.604 "params": { 00:22:12.604 "impl_name": "posix", 00:22:12.604 "recv_buf_size": 2097152, 00:22:12.604 "send_buf_size": 2097152, 00:22:12.604 "enable_recv_pipe": true, 00:22:12.605 "enable_quickack": false, 00:22:12.605 "enable_placement_id": 0, 00:22:12.605 "enable_zerocopy_send_server": true, 00:22:12.605 "enable_zerocopy_send_client": false, 00:22:12.605 "zerocopy_threshold": 0, 00:22:12.605 "tls_version": 0, 00:22:12.605 "enable_ktls": false 00:22:12.605 } 00:22:12.605 } 00:22:12.605 ] 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "subsystem": "vmd", 00:22:12.605 "config": [] 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "subsystem": "accel", 00:22:12.605 "config": [ 00:22:12.605 { 00:22:12.605 "method": "accel_set_options", 00:22:12.605 "params": { 00:22:12.605 "small_cache_size": 128, 00:22:12.605 "large_cache_size": 16, 00:22:12.605 "task_count": 2048, 00:22:12.605 "sequence_count": 2048, 00:22:12.605 "buf_count": 2048 00:22:12.605 } 00:22:12.605 } 00:22:12.605 ] 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "subsystem": "bdev", 00:22:12.605 "config": [ 00:22:12.605 { 00:22:12.605 "method": "bdev_set_options", 00:22:12.605 "params": { 00:22:12.605 "bdev_io_pool_size": 65535, 00:22:12.605 "bdev_io_cache_size": 256, 00:22:12.605 "bdev_auto_examine": true, 00:22:12.605 "iobuf_small_cache_size": 128, 00:22:12.605 "iobuf_large_cache_size": 16 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_raid_set_options", 00:22:12.605 "params": { 00:22:12.605 "process_window_size_kb": 1024 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_iscsi_set_options", 00:22:12.605 "params": { 00:22:12.605 "timeout_sec": 30 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_nvme_set_options", 00:22:12.605 "params": { 00:22:12.605 "action_on_timeout": "none", 00:22:12.605 "timeout_us": 0, 00:22:12.605 "timeout_admin_us": 0, 00:22:12.605 "keep_alive_timeout_ms": 10000, 00:22:12.605 "arbitration_burst": 0, 
00:22:12.605 "low_priority_weight": 0, 00:22:12.605 "medium_priority_weight": 0, 00:22:12.605 "high_priority_weight": 0, 00:22:12.605 "nvme_adminq_poll_period_us": 10000, 00:22:12.605 "nvme_ioq_poll_period_us": 0, 00:22:12.605 "io_queue_requests": 512, 00:22:12.605 "delay_cmd_submit": true, 00:22:12.605 "transport_retry_count": 4, 00:22:12.605 "bdev_retry_count": 3, 00:22:12.605 "transport_ack_timeout": 0, 00:22:12.605 "ctrlr_loss_timeout_sec": 0, 00:22:12.605 "reconnect_delay_sec": 0, 00:22:12.605 "fast_io_fail_timeout_sec": 0, 00:22:12.605 "disable_auto_failback": false, 00:22:12.605 "generate_uuids": false, 00:22:12.605 "transport_tos": 0, 00:22:12.605 "nvme_error_stat": false, 00:22:12.605 "rdma_srq_size": 0, 00:22:12.605 "io_path_stat": false, 00:22:12.605 "allow_accel_sequence": false, 00:22:12.605 "rdma_max_cq_size": 0, 00:22:12.605 "rdma_cm_event_timeout_ms": 0, 00:22:12.605 "dhchap_digests": [ 00:22:12.605 "sha256", 00:22:12.605 "sha384", 00:22:12.605 "sha512" 00:22:12.605 ], 00:22:12.605 "dhchap_dhgroups": [ 00:22:12.605 "null", 00:22:12.605 "ffdhe2048", 00:22:12.605 "ffdhe3072", 00:22:12.605 "ffdhe4096", 00:22:12.605 "ffdhe6144", 00:22:12.605 "ffdhe8192" 00:22:12.605 ] 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_nvme_attach_controller", 00:22:12.605 "params": { 00:22:12.605 "name": "TLSTEST", 00:22:12.605 "trtype": "TCP", 00:22:12.605 "adrfam": "IPv4", 00:22:12.605 "traddr": "10.0.0.2", 00:22:12.605 "trsvcid": "4420", 00:22:12.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.605 "prchk_reftag": false, 00:22:12.605 "prchk_guard": false, 00:22:12.605 "ctrlr_loss_timeout_sec": 0, 00:22:12.605 "reconnect_delay_sec": 0, 00:22:12.605 "fast_io_fail_timeout_sec": 0, 00:22:12.605 "psk": "/tmp/tmp.EfKRaa8e48", 00:22:12.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.605 "hdgst": false, 00:22:12.605 "ddgst": false 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_nvme_set_hotplug", 00:22:12.605 "params": { 00:22:12.605 "period_us": 100000, 00:22:12.605 "enable": false 00:22:12.605 } 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "method": "bdev_wait_for_examine" 00:22:12.605 } 00:22:12.605 ] 00:22:12.605 }, 00:22:12.605 { 00:22:12.605 "subsystem": "nbd", 00:22:12.605 "config": [] 00:22:12.605 } 00:22:12.605 ] 00:22:12.605 }' 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 745477 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745477 ']' 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745477 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.605 15:27:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745477 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745477' 00:22:12.605 killing process with pid 745477 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745477 00:22:12.605 Received shutdown signal, test time was about 10.000000 seconds 00:22:12.605 00:22:12.605 Latency(us) 00:22:12.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:12.605 =================================================================================================================== 00:22:12.605 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:12.605 [2024-07-15 15:27:22.038561] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745477 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 745120 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745120 ']' 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745120 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745120 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745120' 00:22:12.605 killing process with pid 745120 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745120 00:22:12.605 [2024-07-15 15:27:22.205502] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:12.605 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745120 00:22:12.866 15:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:12.866 15:27:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.866 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:12.866 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.866 15:27:22 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:12.866 "subsystems": [ 00:22:12.866 { 00:22:12.866 "subsystem": "keyring", 00:22:12.866 "config": [] 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "subsystem": "iobuf", 00:22:12.866 "config": [ 00:22:12.866 { 00:22:12.866 "method": "iobuf_set_options", 00:22:12.866 "params": { 00:22:12.866 "small_pool_count": 8192, 00:22:12.866 "large_pool_count": 1024, 00:22:12.866 "small_bufsize": 8192, 00:22:12.866 "large_bufsize": 135168 00:22:12.866 } 00:22:12.866 } 00:22:12.866 ] 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "subsystem": "sock", 00:22:12.866 "config": [ 00:22:12.866 { 00:22:12.866 "method": "sock_set_default_impl", 00:22:12.866 "params": { 00:22:12.866 "impl_name": "posix" 00:22:12.866 } 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "method": "sock_impl_set_options", 00:22:12.866 "params": { 00:22:12.866 "impl_name": "ssl", 00:22:12.866 "recv_buf_size": 4096, 00:22:12.866 "send_buf_size": 4096, 00:22:12.866 "enable_recv_pipe": true, 00:22:12.866 "enable_quickack": false, 00:22:12.866 "enable_placement_id": 0, 00:22:12.866 "enable_zerocopy_send_server": true, 00:22:12.866 "enable_zerocopy_send_client": false, 00:22:12.866 "zerocopy_threshold": 0, 00:22:12.866 "tls_version": 0, 00:22:12.866 "enable_ktls": false 00:22:12.866 } 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "method": "sock_impl_set_options", 00:22:12.866 
"params": { 00:22:12.866 "impl_name": "posix", 00:22:12.866 "recv_buf_size": 2097152, 00:22:12.866 "send_buf_size": 2097152, 00:22:12.866 "enable_recv_pipe": true, 00:22:12.866 "enable_quickack": false, 00:22:12.866 "enable_placement_id": 0, 00:22:12.866 "enable_zerocopy_send_server": true, 00:22:12.866 "enable_zerocopy_send_client": false, 00:22:12.866 "zerocopy_threshold": 0, 00:22:12.866 "tls_version": 0, 00:22:12.866 "enable_ktls": false 00:22:12.866 } 00:22:12.866 } 00:22:12.866 ] 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "subsystem": "vmd", 00:22:12.866 "config": [] 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "subsystem": "accel", 00:22:12.866 "config": [ 00:22:12.866 { 00:22:12.866 "method": "accel_set_options", 00:22:12.866 "params": { 00:22:12.866 "small_cache_size": 128, 00:22:12.866 "large_cache_size": 16, 00:22:12.866 "task_count": 2048, 00:22:12.866 "sequence_count": 2048, 00:22:12.866 "buf_count": 2048 00:22:12.866 } 00:22:12.866 } 00:22:12.866 ] 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "subsystem": "bdev", 00:22:12.866 "config": [ 00:22:12.866 { 00:22:12.866 "method": "bdev_set_options", 00:22:12.866 "params": { 00:22:12.866 "bdev_io_pool_size": 65535, 00:22:12.866 "bdev_io_cache_size": 256, 00:22:12.866 "bdev_auto_examine": true, 00:22:12.866 "iobuf_small_cache_size": 128, 00:22:12.866 "iobuf_large_cache_size": 16 00:22:12.866 } 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "method": "bdev_raid_set_options", 00:22:12.866 "params": { 00:22:12.866 "process_window_size_kb": 1024 00:22:12.866 } 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "method": "bdev_iscsi_set_options", 00:22:12.866 "params": { 00:22:12.866 "timeout_sec": 30 00:22:12.866 } 00:22:12.866 }, 00:22:12.866 { 00:22:12.866 "method": "bdev_nvme_set_options", 00:22:12.866 "params": { 00:22:12.866 "action_on_timeout": "none", 00:22:12.866 "timeout_us": 0, 00:22:12.866 "timeout_admin_us": 0, 00:22:12.866 "keep_alive_timeout_ms": 10000, 00:22:12.866 "arbitration_burst": 0, 00:22:12.866 "low_priority_weight": 0, 00:22:12.866 "medium_priority_weight": 0, 00:22:12.866 "high_priority_weight": 0, 00:22:12.866 "nvme_adminq_poll_period_us": 10000, 00:22:12.866 "nvme_ioq_poll_period_us": 0, 00:22:12.866 "io_queue_requests": 0, 00:22:12.866 "delay_cmd_submit": true, 00:22:12.866 "transport_retry_count": 4, 00:22:12.866 "bdev_retry_count": 3, 00:22:12.866 "transport_ack_timeout": 0, 00:22:12.866 "ctrlr_loss_timeout_sec": 0, 00:22:12.866 "reconnect_delay_sec": 0, 00:22:12.866 "fast_io_fail_timeout_sec": 0, 00:22:12.866 "disable_auto_failback": false, 00:22:12.866 "generate_uuids": false, 00:22:12.866 "transport_tos": 0, 00:22:12.866 "nvme_error_stat": false, 00:22:12.866 "rdma_srq_size": 0, 00:22:12.866 "io_path_stat": false, 00:22:12.866 "allow_accel_sequence": false, 00:22:12.866 "rdma_max_cq_size": 0, 00:22:12.866 "rdma_cm_event_timeout_ms": 0, 00:22:12.866 "dhchap_digests": [ 00:22:12.866 "sha256", 00:22:12.866 "sha384", 00:22:12.866 "sha512" 00:22:12.866 ], 00:22:12.866 "dhchap_dhgroups": [ 00:22:12.866 "null", 00:22:12.866 "ffdhe2048", 00:22:12.866 "ffdhe3072", 00:22:12.867 "ffdhe4096", 00:22:12.867 "ffdhe6144", 00:22:12.867 "ffdhe8192" 00:22:12.867 ] 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "bdev_nvme_set_hotplug", 00:22:12.867 "params": { 00:22:12.867 "period_us": 100000, 00:22:12.867 "enable": false 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "bdev_malloc_create", 00:22:12.867 "params": { 00:22:12.867 "name": "malloc0", 00:22:12.867 "num_blocks": 8192, 00:22:12.867 
"block_size": 4096, 00:22:12.867 "physical_block_size": 4096, 00:22:12.867 "uuid": "ec8cf505-10e7-4e67-8dd1-2fffa1ed8cb9", 00:22:12.867 "optimal_io_boundary": 0 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "bdev_wait_for_examine" 00:22:12.867 } 00:22:12.867 ] 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "subsystem": "nbd", 00:22:12.867 "config": [] 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "subsystem": "scheduler", 00:22:12.867 "config": [ 00:22:12.867 { 00:22:12.867 "method": "framework_set_scheduler", 00:22:12.867 "params": { 00:22:12.867 "name": "static" 00:22:12.867 } 00:22:12.867 } 00:22:12.867 ] 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "subsystem": "nvmf", 00:22:12.867 "config": [ 00:22:12.867 { 00:22:12.867 "method": "nvmf_set_config", 00:22:12.867 "params": { 00:22:12.867 "discovery_filter": "match_any", 00:22:12.867 "admin_cmd_passthru": { 00:22:12.867 "identify_ctrlr": false 00:22:12.867 } 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_set_max_subsystems", 00:22:12.867 "params": { 00:22:12.867 "max_subsystems": 1024 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_set_crdt", 00:22:12.867 "params": { 00:22:12.867 "crdt1": 0, 00:22:12.867 "crdt2": 0, 00:22:12.867 "crdt3": 0 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_create_transport", 00:22:12.867 "params": { 00:22:12.867 "trtype": "TCP", 00:22:12.867 "max_queue_depth": 128, 00:22:12.867 "max_io_qpairs_per_ctrlr": 127, 00:22:12.867 "in_capsule_data_size": 4096, 00:22:12.867 "max_io_size": 131072, 00:22:12.867 "io_unit_size": 131072, 00:22:12.867 "max_aq_depth": 128, 00:22:12.867 "num_shared_buffers": 511, 00:22:12.867 "buf_cache_size": 4294967295, 00:22:12.867 "dif_insert_or_strip": false, 00:22:12.867 "zcopy": false, 00:22:12.867 "c2h_success": false, 00:22:12.867 "sock_priority": 0, 00:22:12.867 "abort_timeout_sec": 1, 00:22:12.867 "ack_timeout": 0, 00:22:12.867 "data_wr_pool_size": 0 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_create_subsystem", 00:22:12.867 "params": { 00:22:12.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.867 "allow_any_host": false, 00:22:12.867 "serial_number": "SPDK00000000000001", 00:22:12.867 "model_number": "SPDK bdev Controller", 00:22:12.867 "max_namespaces": 10, 00:22:12.867 "min_cntlid": 1, 00:22:12.867 "max_cntlid": 65519, 00:22:12.867 "ana_reporting": false 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_subsystem_add_host", 00:22:12.867 "params": { 00:22:12.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.867 "host": "nqn.2016-06.io.spdk:host1", 00:22:12.867 "psk": "/tmp/tmp.EfKRaa8e48" 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_subsystem_add_ns", 00:22:12.867 "params": { 00:22:12.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.867 "namespace": { 00:22:12.867 "nsid": 1, 00:22:12.867 "bdev_name": "malloc0", 00:22:12.867 "nguid": "EC8CF50510E74E678DD12FFFA1ED8CB9", 00:22:12.867 "uuid": "ec8cf505-10e7-4e67-8dd1-2fffa1ed8cb9", 00:22:12.867 "no_auto_visible": false 00:22:12.867 } 00:22:12.867 } 00:22:12.867 }, 00:22:12.867 { 00:22:12.867 "method": "nvmf_subsystem_add_listener", 00:22:12.867 "params": { 00:22:12.867 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.867 "listen_address": { 00:22:12.867 "trtype": "TCP", 00:22:12.867 "adrfam": "IPv4", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "trsvcid": "4420" 00:22:12.867 }, 00:22:12.867 "secure_channel": true 00:22:12.867 } 00:22:12.867 } 
00:22:12.867 ] 00:22:12.867 } 00:22:12.867 ] 00:22:12.867 }' 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=745829 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 745829 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745829 ']' 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.867 15:27:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.867 [2024-07-15 15:27:22.416698] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:12.867 [2024-07-15 15:27:22.416751] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.867 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.127 [2024-07-15 15:27:22.486322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.127 [2024-07-15 15:27:22.549746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.127 [2024-07-15 15:27:22.549783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.127 [2024-07-15 15:27:22.549790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.127 [2024-07-15 15:27:22.549796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.127 [2024-07-15 15:27:22.549801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:13.127 [2024-07-15 15:27:22.549857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.127 [2024-07-15 15:27:22.739003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.386 [2024-07-15 15:27:22.754944] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:13.386 [2024-07-15 15:27:22.771002] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.386 [2024-07-15 15:27:22.784093] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=745957 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 745957 /var/tmp/bdevperf.sock 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 745957 ']' 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
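On the initiator side, bdevperf is started with -z so it sits idle until it is configured over its own RPC socket (/var/tmp/bdevperf.sock); in this run the bdev/nvme configuration, including bdev_nvme_attach_controller with the PSK, is fed in as the JSON config echoed just below via -c /dev/fd/63. The equivalent explicit sequence, as the earlier run drove it over RPC (flags copied from the trace, paths shortened):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EfKRaa8e48
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests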
00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.647 15:27:23 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:13.647 "subsystems": [ 00:22:13.647 { 00:22:13.647 "subsystem": "keyring", 00:22:13.647 "config": [] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "iobuf", 00:22:13.647 "config": [ 00:22:13.647 { 00:22:13.647 "method": "iobuf_set_options", 00:22:13.647 "params": { 00:22:13.647 "small_pool_count": 8192, 00:22:13.647 "large_pool_count": 1024, 00:22:13.647 "small_bufsize": 8192, 00:22:13.647 "large_bufsize": 135168 00:22:13.647 } 00:22:13.647 } 00:22:13.647 ] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "sock", 00:22:13.647 "config": [ 00:22:13.647 { 00:22:13.647 "method": "sock_set_default_impl", 00:22:13.647 "params": { 00:22:13.647 "impl_name": "posix" 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "sock_impl_set_options", 00:22:13.647 "params": { 00:22:13.647 "impl_name": "ssl", 00:22:13.647 "recv_buf_size": 4096, 00:22:13.647 "send_buf_size": 4096, 00:22:13.647 "enable_recv_pipe": true, 00:22:13.647 "enable_quickack": false, 00:22:13.647 "enable_placement_id": 0, 00:22:13.647 "enable_zerocopy_send_server": true, 00:22:13.647 "enable_zerocopy_send_client": false, 00:22:13.647 "zerocopy_threshold": 0, 00:22:13.647 "tls_version": 0, 00:22:13.647 "enable_ktls": false 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "sock_impl_set_options", 00:22:13.647 "params": { 00:22:13.647 "impl_name": "posix", 00:22:13.647 "recv_buf_size": 2097152, 00:22:13.647 "send_buf_size": 2097152, 00:22:13.647 "enable_recv_pipe": true, 00:22:13.647 "enable_quickack": false, 00:22:13.647 "enable_placement_id": 0, 00:22:13.647 "enable_zerocopy_send_server": true, 00:22:13.647 "enable_zerocopy_send_client": false, 00:22:13.647 "zerocopy_threshold": 0, 00:22:13.647 "tls_version": 0, 00:22:13.647 "enable_ktls": false 00:22:13.647 } 00:22:13.647 } 00:22:13.647 ] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "vmd", 00:22:13.647 "config": [] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "accel", 00:22:13.647 "config": [ 00:22:13.647 { 00:22:13.647 "method": "accel_set_options", 00:22:13.647 "params": { 00:22:13.647 "small_cache_size": 128, 00:22:13.647 "large_cache_size": 16, 00:22:13.647 "task_count": 2048, 00:22:13.647 "sequence_count": 2048, 00:22:13.647 "buf_count": 2048 00:22:13.647 } 00:22:13.647 } 00:22:13.647 ] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "bdev", 00:22:13.647 "config": [ 00:22:13.647 { 00:22:13.647 "method": "bdev_set_options", 00:22:13.647 "params": { 00:22:13.647 "bdev_io_pool_size": 65535, 00:22:13.647 "bdev_io_cache_size": 256, 00:22:13.647 "bdev_auto_examine": true, 00:22:13.647 "iobuf_small_cache_size": 128, 00:22:13.647 "iobuf_large_cache_size": 16 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "bdev_raid_set_options", 00:22:13.647 "params": { 00:22:13.647 "process_window_size_kb": 1024 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "bdev_iscsi_set_options", 00:22:13.647 "params": { 00:22:13.647 "timeout_sec": 30 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": 
"bdev_nvme_set_options", 00:22:13.647 "params": { 00:22:13.647 "action_on_timeout": "none", 00:22:13.647 "timeout_us": 0, 00:22:13.647 "timeout_admin_us": 0, 00:22:13.647 "keep_alive_timeout_ms": 10000, 00:22:13.647 "arbitration_burst": 0, 00:22:13.647 "low_priority_weight": 0, 00:22:13.647 "medium_priority_weight": 0, 00:22:13.647 "high_priority_weight": 0, 00:22:13.647 "nvme_adminq_poll_period_us": 10000, 00:22:13.647 "nvme_ioq_poll_period_us": 0, 00:22:13.647 "io_queue_requests": 512, 00:22:13.647 "delay_cmd_submit": true, 00:22:13.647 "transport_retry_count": 4, 00:22:13.647 "bdev_retry_count": 3, 00:22:13.647 "transport_ack_timeout": 0, 00:22:13.647 "ctrlr_loss_timeout_sec": 0, 00:22:13.647 "reconnect_delay_sec": 0, 00:22:13.647 "fast_io_fail_timeout_sec": 0, 00:22:13.647 "disable_auto_failback": false, 00:22:13.647 "generate_uuids": false, 00:22:13.647 "transport_tos": 0, 00:22:13.647 "nvme_error_stat": false, 00:22:13.647 "rdma_srq_size": 0, 00:22:13.647 "io_path_stat": false, 00:22:13.647 "allow_accel_sequence": false, 00:22:13.647 "rdma_max_cq_size": 0, 00:22:13.647 "rdma_cm_event_timeout_ms": 0, 00:22:13.647 "dhchap_digests": [ 00:22:13.647 "sha256", 00:22:13.647 "sha384", 00:22:13.647 "sha512" 00:22:13.647 ], 00:22:13.647 "dhchap_dhgroups": [ 00:22:13.647 "null", 00:22:13.647 "ffdhe2048", 00:22:13.647 "ffdhe3072", 00:22:13.647 "ffdhe4096", 00:22:13.647 "ffdhe6144", 00:22:13.647 "ffdhe8192" 00:22:13.647 ] 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "bdev_nvme_attach_controller", 00:22:13.647 "params": { 00:22:13.647 "name": "TLSTEST", 00:22:13.647 "trtype": "TCP", 00:22:13.647 "adrfam": "IPv4", 00:22:13.647 "traddr": "10.0.0.2", 00:22:13.647 "trsvcid": "4420", 00:22:13.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.647 "prchk_reftag": false, 00:22:13.647 "prchk_guard": false, 00:22:13.647 "ctrlr_loss_timeout_sec": 0, 00:22:13.647 "reconnect_delay_sec": 0, 00:22:13.647 "fast_io_fail_timeout_sec": 0, 00:22:13.647 "psk": "/tmp/tmp.EfKRaa8e48", 00:22:13.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.647 "hdgst": false, 00:22:13.647 "ddgst": false 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "bdev_nvme_set_hotplug", 00:22:13.647 "params": { 00:22:13.647 "period_us": 100000, 00:22:13.647 "enable": false 00:22:13.647 } 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "method": "bdev_wait_for_examine" 00:22:13.647 } 00:22:13.647 ] 00:22:13.647 }, 00:22:13.647 { 00:22:13.647 "subsystem": "nbd", 00:22:13.647 "config": [] 00:22:13.647 } 00:22:13.647 ] 00:22:13.647 }' 00:22:13.647 [2024-07-15 15:27:23.246474] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:13.647 [2024-07-15 15:27:23.246525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid745957 ] 00:22:13.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.906 [2024-07-15 15:27:23.300070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.906 [2024-07-15 15:27:23.352039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.906 [2024-07-15 15:27:23.476645] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.907 [2024-07-15 15:27:23.476708] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.475 15:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.475 15:27:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:14.475 15:27:24 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:14.475 Running I/O for 10 seconds... 00:22:24.540 00:22:24.540 Latency(us) 00:22:24.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.540 Verification LBA range: start 0x0 length 0x2000 00:22:24.540 TLSTESTn1 : 10.02 4431.59 17.31 0.00 0.00 28848.18 7208.96 60730.03 00:22:24.540 =================================================================================================================== 00:22:24.540 Total : 4431.59 17.31 0.00 0.00 28848.18 7208.96 60730.03 00:22:24.540 0 00:22:24.540 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.540 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 745957 00:22:24.540 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745957 ']' 00:22:24.540 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745957 00:22:24.540 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745957 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745957' 00:22:24.800 killing process with pid 745957 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745957 00:22:24.800 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.800 00:22:24.800 Latency(us) 00:22:24.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.800 =================================================================================================================== 00:22:24.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.800 [2024-07-15 15:27:34.212289] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 
scheduled for removal in v24.09 hit 1 times 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745957 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 745829 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 745829 ']' 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 745829 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 745829 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 745829' 00:22:24.800 killing process with pid 745829 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 745829 00:22:24.800 [2024-07-15 15:27:34.379861] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.800 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 745829 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=748206 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 748206 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 748206 ']' 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.059 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.059 [2024-07-15 15:27:34.581992] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:25.059 [2024-07-15 15:27:34.582045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.059 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.059 [2024-07-15 15:27:34.652981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.319 [2024-07-15 15:27:34.716649] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.319 [2024-07-15 15:27:34.716686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.319 [2024-07-15 15:27:34.716693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.319 [2024-07-15 15:27:34.716699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.319 [2024-07-15 15:27:34.716705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.319 [2024-07-15 15:27:34.716726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.EfKRaa8e48 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EfKRaa8e48 00:22:25.319 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:25.579 [2024-07-15 15:27:34.981960] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.579 15:27:34 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:25.579 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:25.839 [2024-07-15 15:27:35.282705] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:25.839 [2024-07-15 15:27:35.282915] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.839 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.839 malloc0 00:22:25.839 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:26.099 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.EfKRaa8e48 00:22:26.358 [2024-07-15 15:27:35.730652] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=748528 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 748528 /var/tmp/bdevperf.sock 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 748528 ']' 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.358 15:27:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.358 [2024-07-15 15:27:35.780107] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:26.358 [2024-07-15 15:27:35.780156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748528 ] 00:22:26.358 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.358 [2024-07-15 15:27:35.843161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.358 [2024-07-15 15:27:35.906996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.296 15:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.296 15:27:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:27.296 15:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EfKRaa8e48 00:22:27.296 15:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:27.296 [2024-07-15 15:27:36.849903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.560 nvme0n1 00:22:27.560 15:27:36 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.560 Running I/O for 1 seconds... 
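For reference, the PSK/TLS wiring the script has just exercised condenses to the RPC sequence below. This is a sketch reconstructed only from the commands visible in this run, not the test script itself; the PSK interchange file /tmp/tmp.EfKRaa8e48, the 10.0.0.2 listener address and the RPC socket paths are simply the values this job happened to use.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.EfKRaa8e48                                  # PSK interchange file generated by the test

# target side (default RPC socket /var/tmp/spdk.sock)
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY

# initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock)
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The "PSK path" and "spdk_nvme_ctrlr_opts.psk" deprecation warnings in the surrounding output are triggered by the file-path form of PSK configuration, which this test still exercises alongside the keyring-based key (key0).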
00:22:28.500 00:22:28.500 Latency(us) 00:22:28.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.500 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.500 Verification LBA range: start 0x0 length 0x2000 00:22:28.500 nvme0n1 : 1.05 3364.82 13.14 0.00 0.00 37246.21 8574.29 41506.13 00:22:28.500 =================================================================================================================== 00:22:28.500 Total : 3364.82 13.14 0.00 0.00 37246.21 8574.29 41506.13 00:22:28.500 0 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 748528 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 748528 ']' 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 748528 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.500 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748528 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748528' 00:22:28.760 killing process with pid 748528 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 748528 00:22:28.760 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.760 00:22:28.760 Latency(us) 00:22:28.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.760 =================================================================================================================== 00:22:28.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 748528 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 748206 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 748206 ']' 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 748206 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748206 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748206' 00:22:28.760 killing process with pid 748206 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 748206 00:22:28.760 [2024-07-15 15:27:38.311041] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:28.760 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 748206 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:29.020 15:27:38 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=748930 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 748930 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 748930 ']' 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.020 15:27:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.020 [2024-07-15 15:27:38.509448] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:29.020 [2024-07-15 15:27:38.509495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.020 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.020 [2024-07-15 15:27:38.579324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.279 [2024-07-15 15:27:38.643156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.279 [2024-07-15 15:27:38.643194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.279 [2024-07-15 15:27:38.643202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.279 [2024-07-15 15:27:38.643212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.279 [2024-07-15 15:27:38.643218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
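The nvmfappstart step above amounts to launching nvmf_tgt inside the test's network namespace and polling its RPC socket until it answers. A minimal sketch of that pattern follows; the cvl_0_0_ns_spdk namespace, the -i 0 -e 0xFFFF options and the /var/tmp/spdk.sock socket are the ones this job uses, while the rpc_get_methods probe merely stands in for the fuller waitforlisten helper in autotest_common.sh.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the RPC socket until the target answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
done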
00:22:29.279 [2024-07-15 15:27:38.643237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.848 [2024-07-15 15:27:39.313713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.848 malloc0 00:22:29.848 [2024-07-15 15:27:39.340543] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.848 [2024-07-15 15:27:39.340740] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=749279 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 749279 /var/tmp/bdevperf.sock 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 749279 ']' 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:29.848 15:27:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.848 [2024-07-15 15:27:39.416168] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:29.848 [2024-07-15 15:27:39.416214] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749279 ] 00:22:29.848 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.108 [2024-07-15 15:27:39.477781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.108 [2024-07-15 15:27:39.541486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.678 15:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.678 15:27:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:30.678 15:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EfKRaa8e48 00:22:30.938 15:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.938 [2024-07-15 15:27:40.492010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.197 nvme0n1 00:22:31.197 15:27:40 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.197 Running I/O for 1 seconds... 00:22:32.136 00:22:32.136 Latency(us) 00:22:32.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.136 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:32.136 Verification LBA range: start 0x0 length 0x2000 00:22:32.136 nvme0n1 : 1.05 3159.75 12.34 0.00 0.00 39655.16 6116.69 110100.48 00:22:32.136 =================================================================================================================== 00:22:32.136 Total : 3159.75 12.34 0.00 0.00 39655.16 6116.69 110100.48 00:22:32.136 0 00:22:32.136 15:27:41 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:32.136 15:27:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.136 15:27:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.396 15:27:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.396 15:27:41 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:32.396 "subsystems": [ 00:22:32.396 { 00:22:32.396 "subsystem": "keyring", 00:22:32.396 "config": [ 00:22:32.396 { 00:22:32.396 "method": "keyring_file_add_key", 00:22:32.396 "params": { 00:22:32.396 "name": "key0", 00:22:32.396 "path": "/tmp/tmp.EfKRaa8e48" 00:22:32.396 } 00:22:32.396 } 00:22:32.396 ] 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "subsystem": "iobuf", 00:22:32.396 "config": [ 00:22:32.396 { 00:22:32.396 "method": "iobuf_set_options", 00:22:32.396 "params": { 00:22:32.396 "small_pool_count": 8192, 00:22:32.396 "large_pool_count": 1024, 00:22:32.396 "small_bufsize": 8192, 00:22:32.396 "large_bufsize": 135168 00:22:32.396 } 00:22:32.396 } 00:22:32.396 ] 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "subsystem": "sock", 00:22:32.396 "config": [ 00:22:32.396 { 00:22:32.396 "method": "sock_set_default_impl", 00:22:32.396 "params": { 00:22:32.396 "impl_name": "posix" 00:22:32.396 } 
00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "method": "sock_impl_set_options", 00:22:32.396 "params": { 00:22:32.396 "impl_name": "ssl", 00:22:32.396 "recv_buf_size": 4096, 00:22:32.396 "send_buf_size": 4096, 00:22:32.396 "enable_recv_pipe": true, 00:22:32.396 "enable_quickack": false, 00:22:32.396 "enable_placement_id": 0, 00:22:32.396 "enable_zerocopy_send_server": true, 00:22:32.396 "enable_zerocopy_send_client": false, 00:22:32.396 "zerocopy_threshold": 0, 00:22:32.396 "tls_version": 0, 00:22:32.396 "enable_ktls": false 00:22:32.396 } 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "method": "sock_impl_set_options", 00:22:32.396 "params": { 00:22:32.396 "impl_name": "posix", 00:22:32.396 "recv_buf_size": 2097152, 00:22:32.396 "send_buf_size": 2097152, 00:22:32.396 "enable_recv_pipe": true, 00:22:32.396 "enable_quickack": false, 00:22:32.396 "enable_placement_id": 0, 00:22:32.396 "enable_zerocopy_send_server": true, 00:22:32.396 "enable_zerocopy_send_client": false, 00:22:32.396 "zerocopy_threshold": 0, 00:22:32.396 "tls_version": 0, 00:22:32.396 "enable_ktls": false 00:22:32.396 } 00:22:32.396 } 00:22:32.396 ] 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "subsystem": "vmd", 00:22:32.396 "config": [] 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "subsystem": "accel", 00:22:32.396 "config": [ 00:22:32.396 { 00:22:32.396 "method": "accel_set_options", 00:22:32.396 "params": { 00:22:32.396 "small_cache_size": 128, 00:22:32.396 "large_cache_size": 16, 00:22:32.396 "task_count": 2048, 00:22:32.396 "sequence_count": 2048, 00:22:32.396 "buf_count": 2048 00:22:32.396 } 00:22:32.396 } 00:22:32.396 ] 00:22:32.396 }, 00:22:32.396 { 00:22:32.396 "subsystem": "bdev", 00:22:32.396 "config": [ 00:22:32.396 { 00:22:32.397 "method": "bdev_set_options", 00:22:32.397 "params": { 00:22:32.397 "bdev_io_pool_size": 65535, 00:22:32.397 "bdev_io_cache_size": 256, 00:22:32.397 "bdev_auto_examine": true, 00:22:32.397 "iobuf_small_cache_size": 128, 00:22:32.397 "iobuf_large_cache_size": 16 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_raid_set_options", 00:22:32.397 "params": { 00:22:32.397 "process_window_size_kb": 1024 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_iscsi_set_options", 00:22:32.397 "params": { 00:22:32.397 "timeout_sec": 30 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_nvme_set_options", 00:22:32.397 "params": { 00:22:32.397 "action_on_timeout": "none", 00:22:32.397 "timeout_us": 0, 00:22:32.397 "timeout_admin_us": 0, 00:22:32.397 "keep_alive_timeout_ms": 10000, 00:22:32.397 "arbitration_burst": 0, 00:22:32.397 "low_priority_weight": 0, 00:22:32.397 "medium_priority_weight": 0, 00:22:32.397 "high_priority_weight": 0, 00:22:32.397 "nvme_adminq_poll_period_us": 10000, 00:22:32.397 "nvme_ioq_poll_period_us": 0, 00:22:32.397 "io_queue_requests": 0, 00:22:32.397 "delay_cmd_submit": true, 00:22:32.397 "transport_retry_count": 4, 00:22:32.397 "bdev_retry_count": 3, 00:22:32.397 "transport_ack_timeout": 0, 00:22:32.397 "ctrlr_loss_timeout_sec": 0, 00:22:32.397 "reconnect_delay_sec": 0, 00:22:32.397 "fast_io_fail_timeout_sec": 0, 00:22:32.397 "disable_auto_failback": false, 00:22:32.397 "generate_uuids": false, 00:22:32.397 "transport_tos": 0, 00:22:32.397 "nvme_error_stat": false, 00:22:32.397 "rdma_srq_size": 0, 00:22:32.397 "io_path_stat": false, 00:22:32.397 "allow_accel_sequence": false, 00:22:32.397 "rdma_max_cq_size": 0, 00:22:32.397 "rdma_cm_event_timeout_ms": 0, 00:22:32.397 "dhchap_digests": [ 00:22:32.397 "sha256", 
00:22:32.397 "sha384", 00:22:32.397 "sha512" 00:22:32.397 ], 00:22:32.397 "dhchap_dhgroups": [ 00:22:32.397 "null", 00:22:32.397 "ffdhe2048", 00:22:32.397 "ffdhe3072", 00:22:32.397 "ffdhe4096", 00:22:32.397 "ffdhe6144", 00:22:32.397 "ffdhe8192" 00:22:32.397 ] 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_nvme_set_hotplug", 00:22:32.397 "params": { 00:22:32.397 "period_us": 100000, 00:22:32.397 "enable": false 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_malloc_create", 00:22:32.397 "params": { 00:22:32.397 "name": "malloc0", 00:22:32.397 "num_blocks": 8192, 00:22:32.397 "block_size": 4096, 00:22:32.397 "physical_block_size": 4096, 00:22:32.397 "uuid": "f1ce5bc5-04e6-4a0e-8f81-300b7c51b767", 00:22:32.397 "optimal_io_boundary": 0 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "bdev_wait_for_examine" 00:22:32.397 } 00:22:32.397 ] 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "subsystem": "nbd", 00:22:32.397 "config": [] 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "subsystem": "scheduler", 00:22:32.397 "config": [ 00:22:32.397 { 00:22:32.397 "method": "framework_set_scheduler", 00:22:32.397 "params": { 00:22:32.397 "name": "static" 00:22:32.397 } 00:22:32.397 } 00:22:32.397 ] 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "subsystem": "nvmf", 00:22:32.397 "config": [ 00:22:32.397 { 00:22:32.397 "method": "nvmf_set_config", 00:22:32.397 "params": { 00:22:32.397 "discovery_filter": "match_any", 00:22:32.397 "admin_cmd_passthru": { 00:22:32.397 "identify_ctrlr": false 00:22:32.397 } 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_set_max_subsystems", 00:22:32.397 "params": { 00:22:32.397 "max_subsystems": 1024 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_set_crdt", 00:22:32.397 "params": { 00:22:32.397 "crdt1": 0, 00:22:32.397 "crdt2": 0, 00:22:32.397 "crdt3": 0 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_create_transport", 00:22:32.397 "params": { 00:22:32.397 "trtype": "TCP", 00:22:32.397 "max_queue_depth": 128, 00:22:32.397 "max_io_qpairs_per_ctrlr": 127, 00:22:32.397 "in_capsule_data_size": 4096, 00:22:32.397 "max_io_size": 131072, 00:22:32.397 "io_unit_size": 131072, 00:22:32.397 "max_aq_depth": 128, 00:22:32.397 "num_shared_buffers": 511, 00:22:32.397 "buf_cache_size": 4294967295, 00:22:32.397 "dif_insert_or_strip": false, 00:22:32.397 "zcopy": false, 00:22:32.397 "c2h_success": false, 00:22:32.397 "sock_priority": 0, 00:22:32.397 "abort_timeout_sec": 1, 00:22:32.397 "ack_timeout": 0, 00:22:32.397 "data_wr_pool_size": 0 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_create_subsystem", 00:22:32.397 "params": { 00:22:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.397 "allow_any_host": false, 00:22:32.397 "serial_number": "00000000000000000000", 00:22:32.397 "model_number": "SPDK bdev Controller", 00:22:32.397 "max_namespaces": 32, 00:22:32.397 "min_cntlid": 1, 00:22:32.397 "max_cntlid": 65519, 00:22:32.397 "ana_reporting": false 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_subsystem_add_host", 00:22:32.397 "params": { 00:22:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.397 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.397 "psk": "key0" 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_subsystem_add_ns", 00:22:32.397 "params": { 00:22:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.397 "namespace": { 00:22:32.397 "nsid": 1, 
00:22:32.397 "bdev_name": "malloc0", 00:22:32.397 "nguid": "F1CE5BC504E64A0E8F81300B7C51B767", 00:22:32.397 "uuid": "f1ce5bc5-04e6-4a0e-8f81-300b7c51b767", 00:22:32.397 "no_auto_visible": false 00:22:32.397 } 00:22:32.397 } 00:22:32.397 }, 00:22:32.397 { 00:22:32.397 "method": "nvmf_subsystem_add_listener", 00:22:32.397 "params": { 00:22:32.397 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.397 "listen_address": { 00:22:32.397 "trtype": "TCP", 00:22:32.397 "adrfam": "IPv4", 00:22:32.397 "traddr": "10.0.0.2", 00:22:32.397 "trsvcid": "4420" 00:22:32.397 }, 00:22:32.397 "secure_channel": false, 00:22:32.397 "sock_impl": "ssl" 00:22:32.397 } 00:22:32.397 } 00:22:32.397 ] 00:22:32.397 } 00:22:32.397 ] 00:22:32.397 }' 00:22:32.397 15:27:41 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:32.658 15:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:32.658 "subsystems": [ 00:22:32.658 { 00:22:32.658 "subsystem": "keyring", 00:22:32.658 "config": [ 00:22:32.658 { 00:22:32.658 "method": "keyring_file_add_key", 00:22:32.658 "params": { 00:22:32.658 "name": "key0", 00:22:32.658 "path": "/tmp/tmp.EfKRaa8e48" 00:22:32.658 } 00:22:32.658 } 00:22:32.658 ] 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "subsystem": "iobuf", 00:22:32.658 "config": [ 00:22:32.658 { 00:22:32.658 "method": "iobuf_set_options", 00:22:32.658 "params": { 00:22:32.658 "small_pool_count": 8192, 00:22:32.658 "large_pool_count": 1024, 00:22:32.658 "small_bufsize": 8192, 00:22:32.658 "large_bufsize": 135168 00:22:32.658 } 00:22:32.658 } 00:22:32.658 ] 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "subsystem": "sock", 00:22:32.658 "config": [ 00:22:32.658 { 00:22:32.658 "method": "sock_set_default_impl", 00:22:32.658 "params": { 00:22:32.658 "impl_name": "posix" 00:22:32.658 } 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "method": "sock_impl_set_options", 00:22:32.658 "params": { 00:22:32.658 "impl_name": "ssl", 00:22:32.658 "recv_buf_size": 4096, 00:22:32.658 "send_buf_size": 4096, 00:22:32.658 "enable_recv_pipe": true, 00:22:32.658 "enable_quickack": false, 00:22:32.658 "enable_placement_id": 0, 00:22:32.658 "enable_zerocopy_send_server": true, 00:22:32.658 "enable_zerocopy_send_client": false, 00:22:32.658 "zerocopy_threshold": 0, 00:22:32.658 "tls_version": 0, 00:22:32.658 "enable_ktls": false 00:22:32.658 } 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "method": "sock_impl_set_options", 00:22:32.658 "params": { 00:22:32.658 "impl_name": "posix", 00:22:32.658 "recv_buf_size": 2097152, 00:22:32.658 "send_buf_size": 2097152, 00:22:32.658 "enable_recv_pipe": true, 00:22:32.658 "enable_quickack": false, 00:22:32.658 "enable_placement_id": 0, 00:22:32.658 "enable_zerocopy_send_server": true, 00:22:32.658 "enable_zerocopy_send_client": false, 00:22:32.658 "zerocopy_threshold": 0, 00:22:32.658 "tls_version": 0, 00:22:32.658 "enable_ktls": false 00:22:32.658 } 00:22:32.658 } 00:22:32.658 ] 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "subsystem": "vmd", 00:22:32.658 "config": [] 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "subsystem": "accel", 00:22:32.658 "config": [ 00:22:32.658 { 00:22:32.658 "method": "accel_set_options", 00:22:32.658 "params": { 00:22:32.658 "small_cache_size": 128, 00:22:32.658 "large_cache_size": 16, 00:22:32.658 "task_count": 2048, 00:22:32.658 "sequence_count": 2048, 00:22:32.658 "buf_count": 2048 00:22:32.658 } 00:22:32.658 } 00:22:32.658 ] 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "subsystem": "bdev", 
00:22:32.658 "config": [ 00:22:32.658 { 00:22:32.658 "method": "bdev_set_options", 00:22:32.658 "params": { 00:22:32.658 "bdev_io_pool_size": 65535, 00:22:32.658 "bdev_io_cache_size": 256, 00:22:32.658 "bdev_auto_examine": true, 00:22:32.658 "iobuf_small_cache_size": 128, 00:22:32.658 "iobuf_large_cache_size": 16 00:22:32.658 } 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "method": "bdev_raid_set_options", 00:22:32.658 "params": { 00:22:32.658 "process_window_size_kb": 1024 00:22:32.658 } 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "method": "bdev_iscsi_set_options", 00:22:32.658 "params": { 00:22:32.658 "timeout_sec": 30 00:22:32.658 } 00:22:32.658 }, 00:22:32.658 { 00:22:32.658 "method": "bdev_nvme_set_options", 00:22:32.658 "params": { 00:22:32.659 "action_on_timeout": "none", 00:22:32.659 "timeout_us": 0, 00:22:32.659 "timeout_admin_us": 0, 00:22:32.659 "keep_alive_timeout_ms": 10000, 00:22:32.659 "arbitration_burst": 0, 00:22:32.659 "low_priority_weight": 0, 00:22:32.659 "medium_priority_weight": 0, 00:22:32.659 "high_priority_weight": 0, 00:22:32.659 "nvme_adminq_poll_period_us": 10000, 00:22:32.659 "nvme_ioq_poll_period_us": 0, 00:22:32.659 "io_queue_requests": 512, 00:22:32.659 "delay_cmd_submit": true, 00:22:32.659 "transport_retry_count": 4, 00:22:32.659 "bdev_retry_count": 3, 00:22:32.659 "transport_ack_timeout": 0, 00:22:32.659 "ctrlr_loss_timeout_sec": 0, 00:22:32.659 "reconnect_delay_sec": 0, 00:22:32.659 "fast_io_fail_timeout_sec": 0, 00:22:32.659 "disable_auto_failback": false, 00:22:32.659 "generate_uuids": false, 00:22:32.659 "transport_tos": 0, 00:22:32.659 "nvme_error_stat": false, 00:22:32.659 "rdma_srq_size": 0, 00:22:32.659 "io_path_stat": false, 00:22:32.659 "allow_accel_sequence": false, 00:22:32.659 "rdma_max_cq_size": 0, 00:22:32.659 "rdma_cm_event_timeout_ms": 0, 00:22:32.659 "dhchap_digests": [ 00:22:32.659 "sha256", 00:22:32.659 "sha384", 00:22:32.659 "sha512" 00:22:32.659 ], 00:22:32.659 "dhchap_dhgroups": [ 00:22:32.659 "null", 00:22:32.659 "ffdhe2048", 00:22:32.659 "ffdhe3072", 00:22:32.659 "ffdhe4096", 00:22:32.659 "ffdhe6144", 00:22:32.659 "ffdhe8192" 00:22:32.659 ] 00:22:32.659 } 00:22:32.659 }, 00:22:32.659 { 00:22:32.659 "method": "bdev_nvme_attach_controller", 00:22:32.659 "params": { 00:22:32.659 "name": "nvme0", 00:22:32.659 "trtype": "TCP", 00:22:32.659 "adrfam": "IPv4", 00:22:32.659 "traddr": "10.0.0.2", 00:22:32.659 "trsvcid": "4420", 00:22:32.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.659 "prchk_reftag": false, 00:22:32.659 "prchk_guard": false, 00:22:32.659 "ctrlr_loss_timeout_sec": 0, 00:22:32.659 "reconnect_delay_sec": 0, 00:22:32.659 "fast_io_fail_timeout_sec": 0, 00:22:32.659 "psk": "key0", 00:22:32.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.659 "hdgst": false, 00:22:32.659 "ddgst": false 00:22:32.659 } 00:22:32.659 }, 00:22:32.659 { 00:22:32.659 "method": "bdev_nvme_set_hotplug", 00:22:32.659 "params": { 00:22:32.659 "period_us": 100000, 00:22:32.659 "enable": false 00:22:32.659 } 00:22:32.659 }, 00:22:32.659 { 00:22:32.659 "method": "bdev_enable_histogram", 00:22:32.659 "params": { 00:22:32.659 "name": "nvme0n1", 00:22:32.659 "enable": true 00:22:32.659 } 00:22:32.659 }, 00:22:32.659 { 00:22:32.659 "method": "bdev_wait_for_examine" 00:22:32.659 } 00:22:32.659 ] 00:22:32.659 }, 00:22:32.659 { 00:22:32.659 "subsystem": "nbd", 00:22:32.659 "config": [] 00:22:32.659 } 00:22:32.659 ] 00:22:32.659 }' 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 749279 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 749279 ']' 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 749279 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749279 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749279' 00:22:32.659 killing process with pid 749279 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 749279 00:22:32.659 Received shutdown signal, test time was about 1.000000 seconds 00:22:32.659 00:22:32.659 Latency(us) 00:22:32.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.659 =================================================================================================================== 00:22:32.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.659 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 749279 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 748930 ']' 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 748930' 00:22:32.920 killing process with pid 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 748930 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:32.920 15:27:42 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:32.920 "subsystems": [ 00:22:32.920 { 00:22:32.920 "subsystem": "keyring", 00:22:32.920 "config": [ 00:22:32.920 { 00:22:32.920 "method": "keyring_file_add_key", 00:22:32.920 "params": { 00:22:32.920 "name": "key0", 00:22:32.920 "path": "/tmp/tmp.EfKRaa8e48" 00:22:32.920 } 00:22:32.920 } 00:22:32.920 ] 00:22:32.920 }, 00:22:32.920 { 00:22:32.920 "subsystem": "iobuf", 00:22:32.920 "config": [ 00:22:32.920 { 00:22:32.920 "method": "iobuf_set_options", 00:22:32.920 "params": { 00:22:32.920 "small_pool_count": 8192, 00:22:32.920 "large_pool_count": 1024, 00:22:32.920 "small_bufsize": 8192, 00:22:32.920 "large_bufsize": 135168 00:22:32.920 } 00:22:32.920 } 00:22:32.920 ] 00:22:32.920 }, 00:22:32.920 { 
00:22:32.920 "subsystem": "sock", 00:22:32.920 "config": [ 00:22:32.920 { 00:22:32.920 "method": "sock_set_default_impl", 00:22:32.920 "params": { 00:22:32.920 "impl_name": "posix" 00:22:32.920 } 00:22:32.920 }, 00:22:32.920 { 00:22:32.920 "method": "sock_impl_set_options", 00:22:32.920 "params": { 00:22:32.920 "impl_name": "ssl", 00:22:32.920 "recv_buf_size": 4096, 00:22:32.920 "send_buf_size": 4096, 00:22:32.920 "enable_recv_pipe": true, 00:22:32.920 "enable_quickack": false, 00:22:32.920 "enable_placement_id": 0, 00:22:32.920 "enable_zerocopy_send_server": true, 00:22:32.920 "enable_zerocopy_send_client": false, 00:22:32.920 "zerocopy_threshold": 0, 00:22:32.920 "tls_version": 0, 00:22:32.920 "enable_ktls": false 00:22:32.920 } 00:22:32.920 }, 00:22:32.920 { 00:22:32.920 "method": "sock_impl_set_options", 00:22:32.920 "params": { 00:22:32.920 "impl_name": "posix", 00:22:32.920 "recv_buf_size": 2097152, 00:22:32.920 "send_buf_size": 2097152, 00:22:32.920 "enable_recv_pipe": true, 00:22:32.920 "enable_quickack": false, 00:22:32.920 "enable_placement_id": 0, 00:22:32.920 "enable_zerocopy_send_server": true, 00:22:32.921 "enable_zerocopy_send_client": false, 00:22:32.921 "zerocopy_threshold": 0, 00:22:32.921 "tls_version": 0, 00:22:32.921 "enable_ktls": false 00:22:32.921 } 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "vmd", 00:22:32.921 "config": [] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "accel", 00:22:32.921 "config": [ 00:22:32.921 { 00:22:32.921 "method": "accel_set_options", 00:22:32.921 "params": { 00:22:32.921 "small_cache_size": 128, 00:22:32.921 "large_cache_size": 16, 00:22:32.921 "task_count": 2048, 00:22:32.921 "sequence_count": 2048, 00:22:32.921 "buf_count": 2048 00:22:32.921 } 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "bdev", 00:22:32.921 "config": [ 00:22:32.921 { 00:22:32.921 "method": "bdev_set_options", 00:22:32.921 "params": { 00:22:32.921 "bdev_io_pool_size": 65535, 00:22:32.921 "bdev_io_cache_size": 256, 00:22:32.921 "bdev_auto_examine": true, 00:22:32.921 "iobuf_small_cache_size": 128, 00:22:32.921 "iobuf_large_cache_size": 16 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_raid_set_options", 00:22:32.921 "params": { 00:22:32.921 "process_window_size_kb": 1024 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_iscsi_set_options", 00:22:32.921 "params": { 00:22:32.921 "timeout_sec": 30 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_nvme_set_options", 00:22:32.921 "params": { 00:22:32.921 "action_on_timeout": "none", 00:22:32.921 "timeout_us": 0, 00:22:32.921 "timeout_admin_us": 0, 00:22:32.921 "keep_alive_timeout_ms": 10000, 00:22:32.921 "arbitration_burst": 0, 00:22:32.921 "low_priority_weight": 0, 00:22:32.921 "medium_priority_weight": 0, 00:22:32.921 "high_priority_weight": 0, 00:22:32.921 "nvme_adminq_poll_period_us": 10000, 00:22:32.921 "nvme_ioq_poll_period_us": 0, 00:22:32.921 "io_queue_requests": 0, 00:22:32.921 "delay_cmd_submit": true, 00:22:32.921 "transport_retry_count": 4, 00:22:32.921 "bdev_retry_count": 3, 00:22:32.921 "transport_ack_timeout": 0, 00:22:32.921 "ctrlr_loss_timeout_sec": 0, 00:22:32.921 "reconnect_delay_sec": 0, 00:22:32.921 "fast_io_fail_timeout_sec": 0, 00:22:32.921 "disable_auto_failback": false, 00:22:32.921 "generate_uuids": false, 00:22:32.921 "transport_tos": 0, 00:22:32.921 "nvme_error_stat": false, 00:22:32.921 "rdma_srq_size": 0, 00:22:32.921 
"io_path_stat": false, 00:22:32.921 "allow_accel_sequence": false, 00:22:32.921 "rdma_max_cq_size": 0, 00:22:32.921 "rdma_cm_event_timeout_ms": 0, 00:22:32.921 "dhchap_digests": [ 00:22:32.921 "sha256", 00:22:32.921 "sha384", 00:22:32.921 "sha512" 00:22:32.921 ], 00:22:32.921 "dhchap_dhgroups": [ 00:22:32.921 "null", 00:22:32.921 "ffdhe2048", 00:22:32.921 "ffdhe3072", 00:22:32.921 "ffdhe4096", 00:22:32.921 "ffdhe6144", 00:22:32.921 "ffdhe8192" 00:22:32.921 ] 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_nvme_set_hotplug", 00:22:32.921 "params": { 00:22:32.921 "period_us": 100000, 00:22:32.921 "enable": false 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_malloc_create", 00:22:32.921 "params": { 00:22:32.921 "name": "malloc0", 00:22:32.921 "num_blocks": 8192, 00:22:32.921 "block_size": 4096, 00:22:32.921 "physical_block_size": 4096, 00:22:32.921 "uuid": "f1ce5bc5-04e6-4a0e-8f81-300b7c51b767", 00:22:32.921 "optimal_io_boundary": 0 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "bdev_wait_for_examine" 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "nbd", 00:22:32.921 "config": [] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "scheduler", 00:22:32.921 "config": [ 00:22:32.921 { 00:22:32.921 "method": "framework_set_scheduler", 00:22:32.921 "params": { 00:22:32.921 "name": "static" 00:22:32.921 } 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "subsystem": "nvmf", 00:22:32.921 "config": [ 00:22:32.921 { 00:22:32.921 "method": "nvmf_set_config", 00:22:32.921 "params": { 00:22:32.921 "discovery_filter": "match_any", 00:22:32.921 "admin_cmd_passthru": { 00:22:32.921 "identify_ctrlr": false 00:22:32.921 } 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_set_max_subsystems", 00:22:32.921 "params": { 00:22:32.921 "max_subsystems": 1024 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_set_crdt", 00:22:32.921 "params": { 00:22:32.921 "crdt1": 0, 00:22:32.921 "crdt2": 0, 00:22:32.921 "crdt3": 0 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_create_transport", 00:22:32.921 "params": { 00:22:32.921 "trtype": "TCP", 00:22:32.921 "max_queue_depth": 128, 00:22:32.921 "max_io_qpairs_per_ctrlr": 127, 00:22:32.921 "in_capsule_data_size": 4096, 00:22:32.921 "max_io_size": 131072, 00:22:32.921 "io_unit_size": 131072, 00:22:32.921 "max_aq_depth": 128, 00:22:32.921 "num_shared_buffers": 511, 00:22:32.921 "buf_cache_size": 4294967295, 00:22:32.921 "dif_insert_or_strip": false, 00:22:32.921 "zcopy": false, 00:22:32.921 "c2h_success": false, 00:22:32.921 "sock_priority": 0, 00:22:32.921 "abort_timeout_sec": 1, 00:22:32.921 "ack_timeout": 0, 00:22:32.921 "data_wr_pool_size": 0 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_create_subsystem", 00:22:32.921 "params": { 00:22:32.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.921 00:22:32.921 "allow_any_host": false, 00:22:32.921 "serial_number": "00000000000000000000", 00:22:32.921 "model_number": "SPDK bdev Controller", 00:22:32.921 "max_namespaces": 32, 00:22:32.921 "min_cntlid": 1, 00:22:32.921 "max_cntlid": 65519, 00:22:32.921 "ana_reporting": false 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_subsystem_add_host", 00:22:32.921 "params": { 00:22:32.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.921 "host": 
"nqn.2016-06.io.spdk:host1", 00:22:32.921 "psk": "key0" 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_subsystem_add_ns", 00:22:32.921 "params": { 00:22:32.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.921 "namespace": { 00:22:32.921 "nsid": 1, 00:22:32.921 "bdev_name": "malloc0", 00:22:32.921 "nguid": "F1CE5BC504E64A0E8F81300B7C51B767", 00:22:32.921 "uuid": "f1ce5bc5-04e6-4a0e-8f81-300b7c51b767", 00:22:32.921 "no_auto_visible": false 00:22:32.921 } 00:22:32.921 } 00:22:32.921 }, 00:22:32.921 { 00:22:32.921 "method": "nvmf_subsystem_add_listener", 00:22:32.921 "params": { 00:22:32.921 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.921 "listen_address": { 00:22:32.921 "trtype": "TCP", 00:22:32.921 "adrfam": "IPv4", 00:22:32.921 "traddr": "10.0.0.2", 00:22:32.921 "trsvcid": "4420" 00:22:32.921 }, 00:22:32.921 "secure_channel": false, 00:22:32.921 "sock_impl": "ssl" 00:22:32.921 } 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 } 00:22:32.921 ] 00:22:32.921 }' 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=749899 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 749899 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 749899 ']' 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.921 15:27:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.183 [2024-07-15 15:27:42.541180] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:33.183 [2024-07-15 15:27:42.541240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.183 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.183 [2024-07-15 15:27:42.610030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.183 [2024-07-15 15:27:42.674940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.183 [2024-07-15 15:27:42.674976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.183 [2024-07-15 15:27:42.674983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.183 [2024-07-15 15:27:42.674990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.183 [2024-07-15 15:27:42.674995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:33.183 [2024-07-15 15:27:42.675046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.443 [2024-07-15 15:27:42.872118] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.443 [2024-07-15 15:27:42.904122] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.443 [2024-07-15 15:27:42.914217] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.704 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.704 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.704 15:27:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.704 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:33.704 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=749998 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 749998 /var/tmp/bdevperf.sock 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 749998 ']' 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
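The restart sequence that follows (nvmf_tgt ... -c /dev/fd/62, then bdevperf ... -c /dev/fd/63) works by replaying the JSON configuration captured from the previous instances with save_config. Reduced to its essentials, and reusing the sockets and options from this run, the pattern looks like the sketch below; the actual script routes it through nvmfappstart and its wrappers, and stops the old instances before relaunching.

# capture the live JSON configuration of both applications
tgtcfg=$(./scripts/rpc.py -s /var/tmp/spdk.sock save_config)
bperfcfg=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
# relaunch each one, feeding the captured config back in via a /dev/fd process substitution
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

Because the replayed bdevperf config carries the keyring_file_add_key entry and the bdev_nvme_attach_controller entry with "psk": "key0" (visible in the JSON dump above), the re-attached controller keeps using the same TLS key without re-issuing those RPCs by hand.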
00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.964 15:27:43 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:33.964 "subsystems": [ 00:22:33.964 { 00:22:33.964 "subsystem": "keyring", 00:22:33.964 "config": [ 00:22:33.964 { 00:22:33.964 "method": "keyring_file_add_key", 00:22:33.964 "params": { 00:22:33.964 "name": "key0", 00:22:33.964 "path": "/tmp/tmp.EfKRaa8e48" 00:22:33.964 } 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "iobuf", 00:22:33.964 "config": [ 00:22:33.964 { 00:22:33.964 "method": "iobuf_set_options", 00:22:33.964 "params": { 00:22:33.964 "small_pool_count": 8192, 00:22:33.964 "large_pool_count": 1024, 00:22:33.964 "small_bufsize": 8192, 00:22:33.964 "large_bufsize": 135168 00:22:33.964 } 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "sock", 00:22:33.964 "config": [ 00:22:33.964 { 00:22:33.964 "method": "sock_set_default_impl", 00:22:33.964 "params": { 00:22:33.964 "impl_name": "posix" 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "sock_impl_set_options", 00:22:33.964 "params": { 00:22:33.964 "impl_name": "ssl", 00:22:33.964 "recv_buf_size": 4096, 00:22:33.964 "send_buf_size": 4096, 00:22:33.964 "enable_recv_pipe": true, 00:22:33.964 "enable_quickack": false, 00:22:33.964 "enable_placement_id": 0, 00:22:33.964 "enable_zerocopy_send_server": true, 00:22:33.964 "enable_zerocopy_send_client": false, 00:22:33.964 "zerocopy_threshold": 0, 00:22:33.964 "tls_version": 0, 00:22:33.964 "enable_ktls": false 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "sock_impl_set_options", 00:22:33.964 "params": { 00:22:33.964 "impl_name": "posix", 00:22:33.964 "recv_buf_size": 2097152, 00:22:33.964 "send_buf_size": 2097152, 00:22:33.964 "enable_recv_pipe": true, 00:22:33.964 "enable_quickack": false, 00:22:33.964 "enable_placement_id": 0, 00:22:33.964 "enable_zerocopy_send_server": true, 00:22:33.964 "enable_zerocopy_send_client": false, 00:22:33.964 "zerocopy_threshold": 0, 00:22:33.964 "tls_version": 0, 00:22:33.964 "enable_ktls": false 00:22:33.964 } 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "vmd", 00:22:33.964 "config": [] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "accel", 00:22:33.964 "config": [ 00:22:33.964 { 00:22:33.964 "method": "accel_set_options", 00:22:33.964 "params": { 00:22:33.964 "small_cache_size": 128, 00:22:33.964 "large_cache_size": 16, 00:22:33.964 "task_count": 2048, 00:22:33.964 "sequence_count": 2048, 00:22:33.964 "buf_count": 2048 00:22:33.964 } 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "bdev", 00:22:33.964 "config": [ 00:22:33.964 { 00:22:33.964 "method": "bdev_set_options", 00:22:33.964 "params": { 00:22:33.964 "bdev_io_pool_size": 65535, 00:22:33.964 "bdev_io_cache_size": 256, 00:22:33.964 "bdev_auto_examine": true, 00:22:33.964 "iobuf_small_cache_size": 128, 00:22:33.964 "iobuf_large_cache_size": 16 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_raid_set_options", 00:22:33.964 "params": { 00:22:33.964 "process_window_size_kb": 1024 00:22:33.964 } 
00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_iscsi_set_options", 00:22:33.964 "params": { 00:22:33.964 "timeout_sec": 30 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_nvme_set_options", 00:22:33.964 "params": { 00:22:33.964 "action_on_timeout": "none", 00:22:33.964 "timeout_us": 0, 00:22:33.964 "timeout_admin_us": 0, 00:22:33.964 "keep_alive_timeout_ms": 10000, 00:22:33.964 "arbitration_burst": 0, 00:22:33.964 "low_priority_weight": 0, 00:22:33.964 "medium_priority_weight": 0, 00:22:33.964 "high_priority_weight": 0, 00:22:33.964 "nvme_adminq_poll_period_us": 10000, 00:22:33.964 "nvme_ioq_poll_period_us": 0, 00:22:33.964 "io_queue_requests": 512, 00:22:33.964 "delay_cmd_submit": true, 00:22:33.964 "transport_retry_count": 4, 00:22:33.964 "bdev_retry_count": 3, 00:22:33.964 "transport_ack_timeout": 0, 00:22:33.964 "ctrlr_loss_timeout_sec": 0, 00:22:33.964 "reconnect_delay_sec": 0, 00:22:33.964 "fast_io_fail_timeout_sec": 0, 00:22:33.964 "disable_auto_failback": false, 00:22:33.964 "generate_uuids": false, 00:22:33.964 "transport_tos": 0, 00:22:33.964 "nvme_error_stat": false, 00:22:33.964 "rdma_srq_size": 0, 00:22:33.964 "io_path_stat": false, 00:22:33.964 "allow_accel_sequence": false, 00:22:33.964 "rdma_max_cq_size": 0, 00:22:33.964 "rdma_cm_event_timeout_ms": 0, 00:22:33.964 "dhchap_digests": [ 00:22:33.964 "sha256", 00:22:33.964 "sha384", 00:22:33.964 "sha512" 00:22:33.964 ], 00:22:33.964 "dhchap_dhgroups": [ 00:22:33.964 "null", 00:22:33.964 "ffdhe2048", 00:22:33.964 "ffdhe3072", 00:22:33.964 "ffdhe4096", 00:22:33.964 "ffdhe6144", 00:22:33.964 "ffdhe8192" 00:22:33.964 ] 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_nvme_attach_controller", 00:22:33.964 "params": { 00:22:33.964 "name": "nvme0", 00:22:33.964 "trtype": "TCP", 00:22:33.964 "adrfam": "IPv4", 00:22:33.964 "traddr": "10.0.0.2", 00:22:33.964 "trsvcid": "4420", 00:22:33.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.964 "prchk_reftag": false, 00:22:33.964 "prchk_guard": false, 00:22:33.964 "ctrlr_loss_timeout_sec": 0, 00:22:33.964 "reconnect_delay_sec": 0, 00:22:33.964 "fast_io_fail_timeout_sec": 0, 00:22:33.964 "psk": "key0", 00:22:33.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.964 "hdgst": false, 00:22:33.964 "ddgst": false 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_nvme_set_hotplug", 00:22:33.964 "params": { 00:22:33.964 "period_us": 100000, 00:22:33.964 "enable": false 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_enable_histogram", 00:22:33.964 "params": { 00:22:33.964 "name": "nvme0n1", 00:22:33.964 "enable": true 00:22:33.964 } 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "method": "bdev_wait_for_examine" 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }, 00:22:33.964 { 00:22:33.964 "subsystem": "nbd", 00:22:33.964 "config": [] 00:22:33.964 } 00:22:33.964 ] 00:22:33.964 }' 00:22:33.964 [2024-07-15 15:27:43.398680] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:22:33.964 [2024-07-15 15:27:43.398731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749998 ] 00:22:33.964 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.964 [2024-07-15 15:27:43.460210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.964 [2024-07-15 15:27:43.523895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.225 [2024-07-15 15:27:43.662591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.797 15:27:44 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.797 Running I/O for 1 seconds... 00:22:36.181 00:22:36.181 Latency(us) 00:22:36.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.181 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:36.181 Verification LBA range: start 0x0 length 0x2000 00:22:36.181 nvme0n1 : 1.05 3789.15 14.80 0.00 0.00 33052.35 7809.71 51773.44 00:22:36.181 =================================================================================================================== 00:22:36.181 Total : 3789.15 14.80 0.00 0.00 33052.35 7809.71 51773.44 00:22:36.181 0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:36.181 nvmf_trace.0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 749998 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 749998 ']' 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 749998 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749998 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:36.181 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749998' 00:22:36.181 killing process with pid 749998 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 749998 00:22:36.182 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.182 00:22:36.182 Latency(us) 00:22:36.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.182 =================================================================================================================== 00:22:36.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 749998 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.182 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.182 rmmod nvme_tcp 00:22:36.182 rmmod nvme_fabrics 00:22:36.182 rmmod nvme_keyring 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 749899 ']' 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 749899 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 749899 ']' 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 749899 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749899 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749899' 00:22:36.443 killing process with pid 749899 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 749899 00:22:36.443 15:27:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 749899 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:36.443 15:27:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.988 15:27:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.988 15:27:48 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ueqrLfRTRa /tmp/tmp.ahcPf5Me31 /tmp/tmp.EfKRaa8e48 00:22:38.988 00:22:38.988 real 1m23.550s 00:22:38.988 user 2m9.312s 00:22:38.988 sys 0m25.910s 00:22:38.988 15:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.988 15:27:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.988 ************************************ 00:22:38.988 END TEST nvmf_tls 00:22:38.988 ************************************ 00:22:38.988 15:27:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:38.988 15:27:48 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:38.988 15:27:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.988 15:27:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.988 15:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.988 ************************************ 00:22:38.988 START TEST nvmf_fips 00:22:38.988 ************************************ 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:38.988 * Looking for test storage... 
00:22:38.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.988 15:27:48 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.989 15:27:48 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:38.989 Error setting digest 00:22:38.989 0022EEE2977F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:38.989 0022EEE2977F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.989 15:27:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:47.204 
15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:47.204 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:47.204 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:47.204 Found net devices under 0000:31:00.0: cvl_0_0 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:47.204 Found net devices under 0000:31:00.1: cvl_0_1 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.204 15:27:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:47.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:47.204 00:22:47.204 --- 10.0.0.2 ping statistics --- 00:22:47.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.204 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:47.204 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:47.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:22:47.205 00:22:47.205 --- 10.0.0.1 ping statistics --- 00:22:47.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.205 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=755049 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 755049 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 755049 ']' 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.205 15:27:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.205 [2024-07-15 15:27:56.398100] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:47.205 [2024-07-15 15:27:56.398171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.205 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.205 [2024-07-15 15:27:56.473910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.205 [2024-07-15 15:27:56.545774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.205 [2024-07-15 15:27:56.545811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:47.205 [2024-07-15 15:27:56.545819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.205 [2024-07-15 15:27:56.545825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.205 [2024-07-15 15:27:56.545831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.205 [2024-07-15 15:27:56.545849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:47.803 [2024-07-15 15:27:57.328648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.803 [2024-07-15 15:27:57.344640] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:47.803 [2024-07-15 15:27:57.344782] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.803 [2024-07-15 15:27:57.371401] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:47.803 malloc0 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=755278 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 755278 /var/tmp/bdevperf.sock 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 755278 ']' 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.803 15:27:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:48.063 [2024-07-15 15:27:57.463316] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:22:48.063 [2024-07-15 15:27:57.463374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755278 ] 00:22:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.063 [2024-07-15 15:27:57.517440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.063 [2024-07-15 15:27:57.569339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.635 15:27:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.635 15:27:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:48.635 15:27:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:48.896 [2024-07-15 15:27:58.350153] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.896 [2024-07-15 15:27:58.350216] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:48.896 TLSTESTn1 00:22:48.896 15:27:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:49.156 Running I/O for 10 seconds... 
00:22:59.148 00:22:59.148 Latency(us) 00:22:59.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.148 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:59.148 Verification LBA range: start 0x0 length 0x2000 00:22:59.148 TLSTESTn1 : 10.02 4432.76 17.32 0.00 0.00 28831.52 4997.12 69905.07 00:22:59.148 =================================================================================================================== 00:22:59.148 Total : 4432.76 17.32 0.00 0.00 28831.52 4997.12 69905.07 00:22:59.148 0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:59.148 nvmf_trace.0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 755278 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 755278 ']' 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 755278 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755278 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755278' 00:22:59.148 killing process with pid 755278 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 755278 00:22:59.148 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.148 00:22:59.148 Latency(us) 00:22:59.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.148 =================================================================================================================== 00:22:59.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:59.148 [2024-07-15 15:28:08.748005] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.148 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 755278 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.408 rmmod nvme_tcp 00:22:59.408 rmmod nvme_fabrics 00:22:59.408 rmmod nvme_keyring 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 755049 ']' 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 755049 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 755049 ']' 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 755049 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 755049 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 755049' 00:22:59.408 killing process with pid 755049 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 755049 00:22:59.408 [2024-07-15 15:28:08.979948] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:59.408 15:28:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 755049 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.668 15:28:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.577 15:28:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:01.577 15:28:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:01.577 00:23:01.577 real 0m23.029s 00:23:01.577 user 0m24.348s 00:23:01.577 sys 0m9.315s 00:23:01.577 15:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:01.577 15:28:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:01.577 ************************************ 00:23:01.577 END TEST nvmf_fips 00:23:01.577 
************************************ 00:23:01.837 15:28:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:01.837 15:28:11 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:01.837 15:28:11 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:23:01.837 15:28:11 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:23:01.837 15:28:11 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:23:01.837 15:28:11 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.837 15:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.975 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.975 15:28:18 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.975 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.975 15:28:18 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.975 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.976 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:23:09.976 15:28:18 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:09.976 15:28:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:09.976 15:28:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:23:09.976 15:28:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.976 ************************************ 00:23:09.976 START TEST nvmf_perf_adq 00:23:09.976 ************************************ 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:09.976 * Looking for test storage... 00:23:09.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.976 15:28:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:16.556 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:16.556 Found 0000:31:00.1 (0x8086 - 0x159b) 
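The e810/x722/mlx arrays rebuilt just above are a PCI-ID whitelist: each entry pulls addresses out of a pci_bus_cache map keyed by "vendor:device", and because the NIC class under test is e810 (hence the [[ e810 == e810 ]] branch), pci_devs is narrowed to just the E810 ports before the per-device loop runs. In spirit (the pci_bus_cache contents below are copied from this host's log, the mlx entries are omitted, and this is a sketch rather than the script's exact code):

    # Illustrative only: pci_bus_cache is assumed to map "vendor:device" to the
    # PCI addresses present on the host.
    declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1" )
    e810=() x722=()
    e810+=(${pci_bus_cache["0x8086:0x1592"]})
    e810+=(${pci_bus_cache["0x8086:0x159b"]})      # the two ports found above
    x722+=(${pci_bus_cache["0x8086:0x37d2"]})
    pci_devs=("${e810[@]}")                        # e810 selected: keep only E810 ports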
00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:16.556 Found net devices under 0000:31:00.0: cvl_0_0 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:16.556 Found net devices under 0000:31:00.1: cvl_0_1 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:23:16.556 15:28:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:18.502 15:28:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:19.885 15:28:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:25.168 15:28:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:25.168 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:25.168 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:25.168 Found net devices under 0000:31:00.0: cvl_0_0 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:25.168 Found net devices under 0000:31:00.1: cvl_0_1 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.168 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.428 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.428 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.428 15:28:34 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:23:25.428 00:23:25.428 --- 10.0.0.2 ping statistics --- 00:23:25.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.429 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:23:25.429 00:23:25.429 --- 10.0.0.1 ping statistics --- 00:23:25.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.429 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=767811 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 767811 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 767811 ']' 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.429 15:28:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:25.429 [2024-07-15 15:28:34.926589] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
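nvmf_tcp_init splits the two ports into a small back-to-back topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, TCP/4420 is opened through iptables on the initiator side, and both directions are ping-verified before nvmf_tgt is launched inside the namespace with --wait-for-rpc. Condensed from the commands in the trace (address flushes and the full binary path omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target is then started inside the namespace and polled until ready:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc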
00:23:25.429 [2024-07-15 15:28:34.926656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.429 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.429 [2024-07-15 15:28:35.002973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.688 [2024-07-15 15:28:35.080010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.688 [2024-07-15 15:28:35.080050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.688 [2024-07-15 15:28:35.080058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.688 [2024-07-15 15:28:35.080064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.689 [2024-07-15 15:28:35.080070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.689 [2024-07-15 15:28:35.080112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.689 [2024-07-15 15:28:35.080230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.689 [2024-07-15 15:28:35.080383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.689 [2024-07-15 15:28:35.080384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.258 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.258 [2024-07-15 15:28:35.873915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.518 Malloc1 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.518 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.519 [2024-07-15 15:28:35.933297] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=767965 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:26.519 15:28:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:26.519 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:28.428 
"tick_rate": 2400000000, 00:23:28.428 "poll_groups": [ 00:23:28.428 { 00:23:28.428 "name": "nvmf_tgt_poll_group_000", 00:23:28.428 "admin_qpairs": 1, 00:23:28.428 "io_qpairs": 1, 00:23:28.428 "current_admin_qpairs": 1, 00:23:28.428 "current_io_qpairs": 1, 00:23:28.428 "pending_bdev_io": 0, 00:23:28.428 "completed_nvme_io": 20744, 00:23:28.428 "transports": [ 00:23:28.428 { 00:23:28.428 "trtype": "TCP" 00:23:28.428 } 00:23:28.428 ] 00:23:28.428 }, 00:23:28.428 { 00:23:28.428 "name": "nvmf_tgt_poll_group_001", 00:23:28.428 "admin_qpairs": 0, 00:23:28.428 "io_qpairs": 1, 00:23:28.428 "current_admin_qpairs": 0, 00:23:28.428 "current_io_qpairs": 1, 00:23:28.428 "pending_bdev_io": 0, 00:23:28.428 "completed_nvme_io": 28633, 00:23:28.428 "transports": [ 00:23:28.428 { 00:23:28.428 "trtype": "TCP" 00:23:28.428 } 00:23:28.428 ] 00:23:28.428 }, 00:23:28.428 { 00:23:28.428 "name": "nvmf_tgt_poll_group_002", 00:23:28.428 "admin_qpairs": 0, 00:23:28.428 "io_qpairs": 1, 00:23:28.428 "current_admin_qpairs": 0, 00:23:28.428 "current_io_qpairs": 1, 00:23:28.428 "pending_bdev_io": 0, 00:23:28.428 "completed_nvme_io": 23088, 00:23:28.428 "transports": [ 00:23:28.428 { 00:23:28.428 "trtype": "TCP" 00:23:28.428 } 00:23:28.428 ] 00:23:28.428 }, 00:23:28.428 { 00:23:28.428 "name": "nvmf_tgt_poll_group_003", 00:23:28.428 "admin_qpairs": 0, 00:23:28.428 "io_qpairs": 1, 00:23:28.428 "current_admin_qpairs": 0, 00:23:28.428 "current_io_qpairs": 1, 00:23:28.428 "pending_bdev_io": 0, 00:23:28.428 "completed_nvme_io": 21324, 00:23:28.428 "transports": [ 00:23:28.428 { 00:23:28.428 "trtype": "TCP" 00:23:28.428 } 00:23:28.428 ] 00:23:28.428 } 00:23:28.428 ] 00:23:28.428 }' 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:28.428 15:28:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:28.428 15:28:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:28.428 15:28:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:28.428 15:28:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 767965 00:23:36.568 Initializing NVMe Controllers 00:23:36.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:36.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:36.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:36.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:36.568 Initialization complete. Launching workers. 
00:23:36.568 ======================================================== 00:23:36.568 Latency(us) 00:23:36.568 Device Information : IOPS MiB/s Average min max 00:23:36.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11392.30 44.50 5618.11 1517.90 10368.41 00:23:36.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15389.30 60.11 4159.45 972.74 9983.34 00:23:36.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11137.40 43.51 5747.49 1309.73 10556.55 00:23:36.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15362.40 60.01 4165.43 1371.52 10771.34 00:23:36.568 ======================================================== 00:23:36.568 Total : 53281.40 208.13 4805.00 972.74 10771.34 00:23:36.568 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.568 rmmod nvme_tcp 00:23:36.568 rmmod nvme_fabrics 00:23:36.568 rmmod nvme_keyring 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 767811 ']' 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 767811 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 767811 ']' 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 767811 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:36.568 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 767811 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 767811' 00:23:36.828 killing process with pid 767811 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 767811 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 767811 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:36.828 15:28:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.369 15:28:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.369 15:28:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:39.369 15:28:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:40.310 15:28:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:42.222 15:28:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.588 15:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.589 15:28:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:47.589 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:47.589 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
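Between the baseline pass and the ADQ pass everything is torn down and rebuilt: nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills target pid 767811, the leftover initiator address is flushed, and adq_reload_driver recycles the ice driver so the second run starts from default NIC settings; nvmftestinit then repeats the same device scan and namespace setup seen earlier. The reload step (perf_adq.sh@53-55 in the trace) amounts to:

    rmmod ice       # unload the driver (and any state it holds for the ports)
    modprobe ice    # reload it so the E810 ports come back with defaults
    sleep 5         # give the links time to come back up before reconfiguring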
00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:47.589 Found net devices under 0000:31:00.0: cvl_0_0 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:47.589 Found net devices under 0000:31:00.1: cvl_0_1 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.589 
15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:47.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:23:47.589 00:23:47.589 --- 10.0.0.2 ping statistics --- 00:23:47.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.589 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:23:47.589 00:23:47.589 --- 10.0.0.1 ping statistics --- 00:23:47.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.589 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:47.589 15:28:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:47.589 net.core.busy_poll = 1 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:47.589 net.core.busy_read = 1 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:47.589 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=772574 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 772574 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 772574 ']' 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:47.878 15:28:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:47.878 [2024-07-15 15:28:57.351819] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:23:47.878 [2024-07-15 15:28:57.351907] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.878 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.878 [2024-07-15 15:28:57.427450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.139 [2024-07-15 15:28:57.503099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.139 [2024-07-15 15:28:57.503136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.139 [2024-07-15 15:28:57.503144] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.139 [2024-07-15 15:28:57.503151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.139 [2024-07-15 15:28:57.503157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
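adq_configure_driver (perf_adq.sh@22-38 above) is what distinguishes this second pass: hardware TC offload is enabled on the target port, the channel-pkt-inspect-optimize private flag is turned off, kernel busy polling is switched on, the port is carved into two traffic classes with an mqprio qdisc in channel mode (two queues each), and a flower filter steers NVMe/TCP traffic destined for 10.0.0.2:4420 into the dedicated TC entirely in hardware (skip_sw). Stripped of the ip netns exec prefix, the sequence in the trace is:

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
    # scripts/perf/nvmf/set_xps_rxqs cvl_0_0 then aligns XPS with the queues of each TC

Further down the target is restarted with sock_impl_set_options --enable-placement-id 1 and a transport --sock-priority of 1 (both were 0 in the first pass) so accepted connections stay on the ADQ traffic class, and the later nvmf_get_stats check verifies that at least two of the four poll groups remain idle, whereas the baseline run expected all four to carry I/O.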
00:23:48.139 [2024-07-15 15:28:57.503300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.139 [2024-07-15 15:28:57.503434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.139 [2024-07-15 15:28:57.503589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.139 [2024-07-15 15:28:57.503591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.711 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.712 [2024-07-15 15:28:58.298228] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.712 Malloc1 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.712 15:28:58 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.712 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:48.972 [2024-07-15 15:28:58.357645] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=772774 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:48.972 15:28:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:48.972 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.880 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:50.880 15:29:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:50.881 "tick_rate": 2400000000, 00:23:50.881 "poll_groups": [ 00:23:50.881 { 00:23:50.881 "name": "nvmf_tgt_poll_group_000", 00:23:50.881 "admin_qpairs": 1, 00:23:50.881 "io_qpairs": 2, 00:23:50.881 "current_admin_qpairs": 1, 00:23:50.881 "current_io_qpairs": 2, 00:23:50.881 "pending_bdev_io": 0, 00:23:50.881 "completed_nvme_io": 28975, 00:23:50.881 "transports": [ 00:23:50.881 { 00:23:50.881 "trtype": "TCP" 00:23:50.881 } 00:23:50.881 ] 00:23:50.881 }, 00:23:50.881 { 00:23:50.881 "name": "nvmf_tgt_poll_group_001", 00:23:50.881 "admin_qpairs": 0, 00:23:50.881 "io_qpairs": 2, 00:23:50.881 "current_admin_qpairs": 0, 00:23:50.881 "current_io_qpairs": 2, 00:23:50.881 "pending_bdev_io": 0, 00:23:50.881 "completed_nvme_io": 41498, 00:23:50.881 "transports": [ 00:23:50.881 { 00:23:50.881 "trtype": "TCP" 00:23:50.881 } 00:23:50.881 ] 00:23:50.881 }, 00:23:50.881 { 00:23:50.881 "name": "nvmf_tgt_poll_group_002", 00:23:50.881 "admin_qpairs": 0, 00:23:50.881 "io_qpairs": 0, 00:23:50.881 "current_admin_qpairs": 0, 00:23:50.881 "current_io_qpairs": 0, 00:23:50.881 "pending_bdev_io": 0, 00:23:50.881 "completed_nvme_io": 0, 
00:23:50.881 "transports": [ 00:23:50.881 { 00:23:50.881 "trtype": "TCP" 00:23:50.881 } 00:23:50.881 ] 00:23:50.881 }, 00:23:50.881 { 00:23:50.881 "name": "nvmf_tgt_poll_group_003", 00:23:50.881 "admin_qpairs": 0, 00:23:50.881 "io_qpairs": 0, 00:23:50.881 "current_admin_qpairs": 0, 00:23:50.881 "current_io_qpairs": 0, 00:23:50.881 "pending_bdev_io": 0, 00:23:50.881 "completed_nvme_io": 0, 00:23:50.881 "transports": [ 00:23:50.881 { 00:23:50.881 "trtype": "TCP" 00:23:50.881 } 00:23:50.881 ] 00:23:50.881 } 00:23:50.881 ] 00:23:50.881 }' 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:50.881 15:29:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 772774 00:23:59.090 Initializing NVMe Controllers 00:23:59.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:59.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:59.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:59.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:59.090 Initialization complete. Launching workers. 00:23:59.090 ======================================================== 00:23:59.090 Latency(us) 00:23:59.090 Device Information : IOPS MiB/s Average min max 00:23:59.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9552.20 37.31 6700.03 1580.63 51964.47 00:23:59.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9918.00 38.74 6453.26 1370.85 50414.62 00:23:59.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12103.70 47.28 5303.70 1329.49 49208.39 00:23:59.090 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5877.70 22.96 10923.29 1529.33 53921.05 00:23:59.090 ======================================================== 00:23:59.090 Total : 37451.59 146.30 6846.21 1329.49 53921.05 00:23:59.090 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.090 rmmod nvme_tcp 00:23:59.090 rmmod nvme_fabrics 00:23:59.090 rmmod nvme_keyring 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 772574 ']' 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 772574 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 772574 ']' 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 772574 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772574 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772574' 00:23:59.090 killing process with pid 772574 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 772574 00:23:59.090 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 772574 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.351 15:29:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.668 15:29:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.668 15:29:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:02.668 00:24:02.668 real 0m53.204s 00:24:02.668 user 2m49.846s 00:24:02.668 sys 0m10.671s 00:24:02.668 15:29:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.668 15:29:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:02.668 ************************************ 00:24:02.668 END TEST nvmf_perf_adq 00:24:02.668 ************************************ 00:24:02.668 15:29:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.668 15:29:11 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:02.668 15:29:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.668 15:29:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.668 15:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.668 ************************************ 00:24:02.668 START TEST nvmf_shutdown 00:24:02.668 ************************************ 00:24:02.668 15:29:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:02.668 * Looking for test storage... 
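Note on the nvmf_perf_adq verification that closed out above, before the shutdown suite starts: the test decides whether ADQ steering worked by asking the target for poll-group statistics and counting groups that never received an I/O qpair. A minimal sketch of that check, assuming the default /var/tmp/spdk.sock RPC socket:

# Count poll groups with no active I/O qpairs (jq filter taken from perf_adq.sh@100 above)
idle=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
# With four poll groups (-m 0xF) and traffic pinned to one traffic class,
# at least two groups should stay idle; otherwise steering did not take effect
[[ $idle -lt 2 ]] && echo 'ADQ steering not effective' >&2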
00:24:02.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:02.668 ************************************ 00:24:02.668 START TEST nvmf_shutdown_tc1 00:24:02.668 ************************************ 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:24:02.668 15:29:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.668 15:29:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:10.799 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:10.799 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:10.799 15:29:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:10.799 Found net devices under 0000:31:00.0: cvl_0_0 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:10.799 Found net devices under 0000:31:00.1: cvl_0_1 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:10.799 15:29:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:24:10.799 00:24:10.799 --- 10.0.0.2 ping statistics --- 00:24:10.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.799 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
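Note: nvmf_tcp_init above builds a point-to-point test bed out of the two detected E810 ports by moving the target-side port into its own network namespace; the target then listens on 10.0.0.2 inside the namespace while the initiator reaches it from 10.0.0.1 on the host side. The sequence, condensed from the commands shown above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator/host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                             # sanity-check the path both ways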
00:24:10.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:24:10.799 00:24:10.799 --- 10.0.0.1 ping statistics --- 00:24:10.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.799 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=779648 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 779648 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 779648 ']' 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.799 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.799 [2024-07-15 15:29:20.161511] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
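Note: nvmfappstart above launches nvmf_tgt inside the namespace, records its PID (779648) and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock socket; the polling loop here is an illustration of what the helper does, and the real waitforlisten in autotest_common.sh also caps the retries (max_retries=100 above):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the RPC socket until the freshly started target responds
while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.1
done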
00:24:10.799 [2024-07-15 15:29:20.161573] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.799 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.799 [2024-07-15 15:29:20.240442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.799 [2024-07-15 15:29:20.314145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.799 [2024-07-15 15:29:20.314187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.799 [2024-07-15 15:29:20.314195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.799 [2024-07-15 15:29:20.314201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.799 [2024-07-15 15:29:20.314207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.799 [2024-07-15 15:29:20.314315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.799 [2024-07-15 15:29:20.314475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.799 [2024-07-15 15:29:20.314834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:10.799 [2024-07-15 15:29:20.314835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.370 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.370 [2024-07-15 15:29:20.986507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.630 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.630 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:11.630 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:11.630 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.630 15:29:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:11.630 15:29:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.630 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.630 Malloc1 00:24:11.630 [2024-07-15 15:29:21.089936] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.630 Malloc2 00:24:11.630 Malloc3 00:24:11.630 Malloc4 00:24:11.630 Malloc5 00:24:11.891 Malloc6 00:24:11.891 Malloc7 00:24:11.891 Malloc8 00:24:11.891 Malloc9 00:24:11.891 Malloc10 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=779863 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 779863 
/var/tmp/bdevperf.sock 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 779863 ']' 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.891 { 00:24:11.891 "params": { 00:24:11.891 "name": "Nvme$subsystem", 00:24:11.891 "trtype": "$TEST_TRANSPORT", 00:24:11.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.891 "adrfam": "ipv4", 00:24:11.891 "trsvcid": "$NVMF_PORT", 00:24:11.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.891 "hdgst": ${hdgst:-false}, 00:24:11.891 "ddgst": ${ddgst:-false} 00:24:11.891 }, 00:24:11.891 "method": "bdev_nvme_attach_controller" 00:24:11.891 } 00:24:11.891 EOF 00:24:11.891 )") 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.891 { 00:24:11.891 "params": { 00:24:11.891 "name": "Nvme$subsystem", 00:24:11.891 "trtype": "$TEST_TRANSPORT", 00:24:11.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.891 "adrfam": "ipv4", 00:24:11.891 "trsvcid": "$NVMF_PORT", 00:24:11.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.891 "hdgst": ${hdgst:-false}, 00:24:11.891 "ddgst": ${ddgst:-false} 00:24:11.891 }, 00:24:11.891 "method": "bdev_nvme_attach_controller" 00:24:11.891 } 00:24:11.891 EOF 00:24:11.891 )") 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.891 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.891 { 00:24:11.891 "params": { 00:24:11.891 
"name": "Nvme$subsystem", 00:24:11.891 "trtype": "$TEST_TRANSPORT", 00:24:11.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.891 "adrfam": "ipv4", 00:24:11.891 "trsvcid": "$NVMF_PORT", 00:24:11.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.891 "hdgst": ${hdgst:-false}, 00:24:11.891 "ddgst": ${ddgst:-false} 00:24:11.891 }, 00:24:11.891 "method": "bdev_nvme_attach_controller" 00:24:11.891 } 00:24:11.891 EOF 00:24:11.891 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 [2024-07-15 15:29:21.537084] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:12.153 [2024-07-15 15:29:21.537135] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.153 "adrfam": "ipv4", 00:24:12.153 "trsvcid": "$NVMF_PORT", 00:24:12.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.153 "hdgst": ${hdgst:-false}, 00:24:12.153 "ddgst": ${ddgst:-false} 00:24:12.153 }, 00:24:12.153 "method": "bdev_nvme_attach_controller" 00:24:12.153 } 00:24:12.153 EOF 00:24:12.153 )") 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:12.153 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:12.153 { 00:24:12.153 "params": { 00:24:12.153 "name": "Nvme$subsystem", 00:24:12.153 "trtype": "$TEST_TRANSPORT", 00:24:12.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "$NVMF_PORT", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.154 "hdgst": ${hdgst:-false}, 
00:24:12.154 "ddgst": ${ddgst:-false} 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 } 00:24:12.154 EOF 00:24:12.154 )") 00:24:12.154 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:12.154 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.154 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:24:12.154 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:12.154 15:29:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme1", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme2", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme3", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme4", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme5", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme6", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme7", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 
00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme8", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme9", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 },{ 00:24:12.154 "params": { 00:24:12.154 "name": "Nvme10", 00:24:12.154 "trtype": "tcp", 00:24:12.154 "traddr": "10.0.0.2", 00:24:12.154 "adrfam": "ipv4", 00:24:12.154 "trsvcid": "4420", 00:24:12.154 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.154 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.154 "hdgst": false, 00:24:12.154 "ddgst": false 00:24:12.154 }, 00:24:12.154 "method": "bdev_nvme_attach_controller" 00:24:12.154 }' 00:24:12.154 [2024-07-15 15:29:21.602121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.154 [2024-07-15 15:29:21.666954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 779863 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:13.569 15:29:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:24:14.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 779863 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 779648 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.950 "name": "Nvme$subsystem", 00:24:14.950 "trtype": "$TEST_TRANSPORT", 00:24:14.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.950 "adrfam": "ipv4", 00:24:14.950 "trsvcid": "$NVMF_PORT", 00:24:14.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.950 "hdgst": ${hdgst:-false}, 00:24:14.950 "ddgst": ${ddgst:-false} 00:24:14.950 }, 00:24:14.950 "method": "bdev_nvme_attach_controller" 00:24:14.950 } 00:24:14.950 EOF 00:24:14.950 )") 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.950 [2024-07-15 15:29:24.249119] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:14.950 [2024-07-15 15:29:24.249173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780539 ] 00:24:14.950 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.950 { 00:24:14.950 "params": { 00:24:14.951 "name": "Nvme$subsystem", 00:24:14.951 "trtype": "$TEST_TRANSPORT", 00:24:14.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "$NVMF_PORT", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.951 "hdgst": ${hdgst:-false}, 00:24:14.951 "ddgst": ${ddgst:-false} 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 } 00:24:14.951 EOF 00:24:14.951 )") 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.951 { 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme$subsystem", 00:24:14.951 "trtype": "$TEST_TRANSPORT", 00:24:14.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "$NVMF_PORT", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.951 "hdgst": ${hdgst:-false}, 00:24:14.951 "ddgst": ${ddgst:-false} 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 } 00:24:14.951 EOF 00:24:14.951 )") 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:14.951 { 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme$subsystem", 00:24:14.951 "trtype": "$TEST_TRANSPORT", 00:24:14.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "$NVMF_PORT", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:14.951 "hdgst": ${hdgst:-false}, 00:24:14.951 "ddgst": ${ddgst:-false} 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 } 00:24:14.951 EOF 00:24:14.951 )") 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
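The xtrace above (nvmf/common.sh@532 through @558) is gen_nvmf_target_json expanding one bdev_nvme_attach_controller stanza per subsystem from a heredoc, joining the fragments with IFS=',' and running the result through jq. A minimal stand-alone sketch of that pattern follows; the function name gen_attach_controller_config and the defaults for TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are illustrative only, and the real helper nests the joined objects inside the full bdevperf "subsystems" layout rather than the bare wrapper used here.

gen_attach_controller_config() {
    # illustrative defaults; the test harness exports these before calling its helper
    local TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
    local NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
    local NVMF_PORT=${NVMF_PORT:-4420}
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # one attach-controller JSON object per requested subsystem number
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # join the objects with commas and let jq validate/pretty-print the result
    local IFS=,
    printf '{"config":[%s]}\n' "${config[*]}" | jq .
}

Called as gen_attach_controller_config 1 2 3 4 5 6 7 8 9 10, this emits the same ten Nvme1..Nvme10 stanzas that appear fully expanded in the printf output traced here.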
00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:24:14.951 15:29:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme1", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme2", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme3", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme4", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme5", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme6", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme7", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme8", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:14.951 "hdgst": false, 
00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme9", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 },{ 00:24:14.951 "params": { 00:24:14.951 "name": "Nvme10", 00:24:14.951 "trtype": "tcp", 00:24:14.951 "traddr": "10.0.0.2", 00:24:14.951 "adrfam": "ipv4", 00:24:14.951 "trsvcid": "4420", 00:24:14.951 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:14.951 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:14.951 "hdgst": false, 00:24:14.951 "ddgst": false 00:24:14.951 }, 00:24:14.951 "method": "bdev_nvme_attach_controller" 00:24:14.951 }' 00:24:14.951 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.951 [2024-07-15 15:29:24.314534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.951 [2024-07-15 15:29:24.378859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.343 Running I/O for 1 seconds... 00:24:17.283 00:24:17.283 Latency(us) 00:24:17.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.283 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme1n1 : 1.12 228.93 14.31 0.00 0.00 276253.87 19551.57 239424.85 00:24:17.283 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme2n1 : 1.11 230.56 14.41 0.00 0.00 269992.96 19114.67 244667.73 00:24:17.283 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme3n1 : 1.18 271.38 16.96 0.00 0.00 225584.98 16274.77 227191.47 00:24:17.283 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme4n1 : 1.12 227.77 14.24 0.00 0.00 263750.19 20643.84 242920.11 00:24:17.283 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme5n1 : 1.15 222.47 13.90 0.00 0.00 265770.24 22282.24 244667.73 00:24:17.283 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme6n1 : 1.12 228.26 14.27 0.00 0.00 253886.72 26105.17 242920.11 00:24:17.283 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme7n1 : 1.18 271.17 16.95 0.00 0.00 210665.64 14964.05 249910.61 00:24:17.283 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme8n1 : 1.20 267.78 16.74 0.00 0.00 210156.29 11851.09 235929.60 00:24:17.283 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme9n1 : 1.19 273.18 17.07 0.00 0.00 202039.67 5434.03 244667.73 00:24:17.283 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:24:17.283 Verification LBA range: start 0x0 length 0x400 00:24:17.283 Nvme10n1 : 1.20 266.96 16.69 0.00 0.00 203351.98 11632.64 262144.00 00:24:17.283 =================================================================================================================== 00:24:17.283 Total : 2488.46 155.53 0.00 0.00 235012.18 5434.03 262144.00 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:17.543 15:29:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:17.543 rmmod nvme_tcp 00:24:17.543 rmmod nvme_fabrics 00:24:17.543 rmmod nvme_keyring 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 779648 ']' 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 779648 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 779648 ']' 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 779648 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779648 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779648' 00:24:17.543 killing process with pid 779648 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 779648 00:24:17.543 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 779648 
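The killprocess 779648 sequence traced here (common/autotest_common.sh@948 through @972) is the harness's guarded teardown: confirm the pid is still alive, confirm it is not a sudo wrapper, then signal and reap it. A rough stand-alone sketch of that logic, assuming the target was started as a child of the current shell so that wait can reap it:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    # only proceed if the process still exists
    kill -0 "$pid" 2>/dev/null || return 1
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # never signal an elevated sudo wrapper by mistake
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # reap the child; ignore its non-zero exit after the kill
}

In the trace the reaped process is reactor_1, the nvmf_tgt reactor thread started for this test.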
00:24:17.802 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.802 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.802 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.802 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.802 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.803 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.803 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.803 15:29:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.343 00:24:20.343 real 0m17.276s 00:24:20.343 user 0m34.506s 00:24:20.343 sys 0m6.932s 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:20.343 ************************************ 00:24:20.343 END TEST nvmf_shutdown_tc1 00:24:20.343 ************************************ 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:20.343 ************************************ 00:24:20.343 START TEST nvmf_shutdown_tc2 00:24:20.343 ************************************ 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:20.343 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.344 
15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.344 15:29:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:20.344 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:20.344 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:24:20.344 Found net devices under 0000:31:00.0: cvl_0_0 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:20.344 Found net devices under 0000:31:00.1: cvl_0_1 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:24:20.344 00:24:20.344 --- 10.0.0.2 ping statistics --- 00:24:20.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.344 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:24:20.344 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:20.345 00:24:20.345 --- 10.0.0.1 ping statistics --- 00:24:20.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.345 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=781643 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 781643 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 781643 ']' 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:20.345 15:29:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.624 [2024-07-15 15:29:29.976378] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:20.624 [2024-07-15 15:29:29.976466] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.624 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.624 [2024-07-15 15:29:30.063506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.624 [2024-07-15 15:29:30.142059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.624 [2024-07-15 15:29:30.142095] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.624 [2024-07-15 15:29:30.142103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.624 [2024-07-15 15:29:30.142109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.624 [2024-07-15 15:29:30.142115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
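Before this nvmf_tgt startup, nvmf_tcp_init (traced a little earlier) split the two ice ports into a target/initiator pair: cvl_0_0 is moved into a network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the default namespace as 10.0.0.1, and the target is then run under ip netns exec. A condensed sketch of that sequence, to be run as root, with the interface and namespace names taken from the trace:

TGT_IF=cvl_0_0                 # port handed to the target namespace
INI_IF=cvl_0_1                 # port left with the initiator (default namespace)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP listener port toward the initiator side
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# sanity-check both directions, matching the ping output captured above
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

After this, the nvmf_tgt binary and its RPCs are prefixed with ip netns exec cvl_0_0_ns_spdk, which is why the nvmfappstart invocation traced here runs through that wrapper.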
00:24:20.624 [2024-07-15 15:29:30.142218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.624 [2024-07-15 15:29:30.142377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.624 [2024-07-15 15:29:30.142735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:20.624 [2024-07-15 15:29:30.142736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 [2024-07-15 15:29:30.784380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.192 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.450 15:29:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.450 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.451 15:29:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.451 Malloc1 00:24:21.451 [2024-07-15 15:29:30.884713] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.451 Malloc2 00:24:21.451 Malloc3 00:24:21.451 Malloc4 00:24:21.451 Malloc5 00:24:21.451 Malloc6 00:24:21.709 Malloc7 00:24:21.709 Malloc8 00:24:21.709 Malloc9 00:24:21.709 Malloc10 00:24:21.709 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=782030 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 782030 /var/tmp/bdevperf.sock 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 782030 ']' 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
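The /dev/fd/NN paths that appear in the bdevperf and bdev_svc command lines in this log are bash process substitutions carrying the JSON produced by gen_nvmf_target_json. A hypothetical condensed version of what target/shutdown.sh does at this point; the socket path, the -q/-o/-w/-t options and the helper names (gen_nvmf_target_json, waitforlisten) follow the trace, rootdir is the SPDK checkout, and rpc_cmd in the trace is a thin wrapper around scripts/rpc.py:

sock=/var/tmp/bdevperf.sock

"$rootdir/build/examples/bdevperf" -r "$sock" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# block until the application answers RPCs on its UNIX socket,
# then until every bdev from the JSON config has been constructed
waitforlisten "$perfpid" "$sock"
"$rootdir/scripts/rpc.py" -s "$sock" framework_wait_init

Once framework_wait_init returns, the test polls bdev_get_iostat on Nvme1n1 (the read_io_count values of 3, 67 and 131 seen below) until at least 100 reads have completed before it starts tearing the target down.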
00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.710 { 00:24:21.710 "params": { 00:24:21.710 "name": "Nvme$subsystem", 00:24:21.710 "trtype": "$TEST_TRANSPORT", 00:24:21.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.710 "adrfam": "ipv4", 00:24:21.710 "trsvcid": "$NVMF_PORT", 00:24:21.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.710 "hdgst": ${hdgst:-false}, 00:24:21.710 "ddgst": ${ddgst:-false} 00:24:21.710 }, 00:24:21.710 "method": "bdev_nvme_attach_controller" 00:24:21.710 } 00:24:21.710 EOF 00:24:21.710 )") 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.710 { 00:24:21.710 "params": { 00:24:21.710 "name": "Nvme$subsystem", 00:24:21.710 "trtype": "$TEST_TRANSPORT", 00:24:21.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.710 "adrfam": "ipv4", 00:24:21.710 "trsvcid": "$NVMF_PORT", 00:24:21.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.710 "hdgst": ${hdgst:-false}, 00:24:21.710 "ddgst": ${ddgst:-false} 00:24:21.710 }, 00:24:21.710 "method": "bdev_nvme_attach_controller" 00:24:21.710 } 00:24:21.710 EOF 00:24:21.710 )") 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.710 { 00:24:21.710 "params": { 00:24:21.710 "name": "Nvme$subsystem", 00:24:21.710 "trtype": "$TEST_TRANSPORT", 00:24:21.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.710 "adrfam": "ipv4", 00:24:21.710 "trsvcid": "$NVMF_PORT", 00:24:21.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.710 "hdgst": ${hdgst:-false}, 00:24:21.710 "ddgst": ${ddgst:-false} 00:24:21.710 }, 00:24:21.710 "method": "bdev_nvme_attach_controller" 00:24:21.710 } 00:24:21.710 EOF 00:24:21.710 )") 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.710 { 00:24:21.710 "params": { 00:24:21.710 "name": "Nvme$subsystem", 00:24:21.710 "trtype": "$TEST_TRANSPORT", 00:24:21.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.710 "adrfam": "ipv4", 00:24:21.710 "trsvcid": "$NVMF_PORT", 00:24:21.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.710 "hdgst": ${hdgst:-false}, 00:24:21.710 "ddgst": ${ddgst:-false} 00:24:21.710 }, 00:24:21.710 "method": "bdev_nvme_attach_controller" 00:24:21.710 } 00:24:21.710 EOF 00:24:21.710 )") 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.710 { 00:24:21.710 "params": { 00:24:21.710 "name": "Nvme$subsystem", 00:24:21.710 "trtype": "$TEST_TRANSPORT", 00:24:21.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.710 "adrfam": "ipv4", 00:24:21.710 "trsvcid": "$NVMF_PORT", 00:24:21.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.710 "hdgst": ${hdgst:-false}, 00:24:21.710 "ddgst": ${ddgst:-false} 00:24:21.710 }, 00:24:21.710 "method": "bdev_nvme_attach_controller" 00:24:21.710 } 00:24:21.710 EOF 00:24:21.710 )") 00:24:21.710 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.968 { 00:24:21.968 "params": { 00:24:21.968 "name": "Nvme$subsystem", 00:24:21.968 "trtype": "$TEST_TRANSPORT", 00:24:21.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.968 "adrfam": "ipv4", 00:24:21.968 "trsvcid": "$NVMF_PORT", 00:24:21.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.968 "hdgst": ${hdgst:-false}, 00:24:21.968 "ddgst": ${ddgst:-false} 00:24:21.968 }, 00:24:21.968 "method": "bdev_nvme_attach_controller" 00:24:21.968 } 00:24:21.968 EOF 00:24:21.968 )") 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.968 [2024-07-15 15:29:31.336031] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:21.968 [2024-07-15 15:29:31.336082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782030 ] 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.968 { 00:24:21.968 "params": { 00:24:21.968 "name": "Nvme$subsystem", 00:24:21.968 "trtype": "$TEST_TRANSPORT", 00:24:21.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.968 "adrfam": "ipv4", 00:24:21.968 "trsvcid": "$NVMF_PORT", 00:24:21.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.968 "hdgst": ${hdgst:-false}, 00:24:21.968 "ddgst": ${ddgst:-false} 00:24:21.968 }, 00:24:21.968 "method": "bdev_nvme_attach_controller" 00:24:21.968 } 00:24:21.968 EOF 00:24:21.968 )") 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.968 { 00:24:21.968 "params": { 00:24:21.968 "name": "Nvme$subsystem", 00:24:21.968 "trtype": "$TEST_TRANSPORT", 00:24:21.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.968 "adrfam": "ipv4", 00:24:21.968 "trsvcid": "$NVMF_PORT", 00:24:21.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.968 "hdgst": ${hdgst:-false}, 00:24:21.968 "ddgst": ${ddgst:-false} 00:24:21.968 }, 00:24:21.968 "method": "bdev_nvme_attach_controller" 00:24:21.968 } 00:24:21.968 EOF 00:24:21.968 )") 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.968 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.968 { 00:24:21.968 "params": { 00:24:21.968 "name": "Nvme$subsystem", 00:24:21.968 "trtype": "$TEST_TRANSPORT", 00:24:21.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "$NVMF_PORT", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.969 "hdgst": ${hdgst:-false}, 00:24:21.969 "ddgst": ${ddgst:-false} 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 } 00:24:21.969 EOF 00:24:21.969 )") 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:21.969 { 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme$subsystem", 00:24:21.969 "trtype": "$TEST_TRANSPORT", 00:24:21.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "$NVMF_PORT", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.969 "hdgst": 
${hdgst:-false}, 00:24:21.969 "ddgst": ${ddgst:-false} 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 } 00:24:21.969 EOF 00:24:21.969 )") 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:21.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:21.969 15:29:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme1", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme2", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme3", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme4", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme5", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme6", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme7", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:21.969 "hdgst": false, 00:24:21.969 
"ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme8", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme9", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 },{ 00:24:21.969 "params": { 00:24:21.969 "name": "Nvme10", 00:24:21.969 "trtype": "tcp", 00:24:21.969 "traddr": "10.0.0.2", 00:24:21.969 "adrfam": "ipv4", 00:24:21.969 "trsvcid": "4420", 00:24:21.969 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:21.969 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:21.969 "hdgst": false, 00:24:21.969 "ddgst": false 00:24:21.969 }, 00:24:21.969 "method": "bdev_nvme_attach_controller" 00:24:21.969 }' 00:24:21.969 [2024-07-15 15:29:31.400432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.969 [2024-07-15 15:29:31.464910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.870 Running I/O for 10 seconds... 00:24:23.870 15:29:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.870 15:29:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:23.870 15:29:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:23.870 15:29:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.870 15:29:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.870 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.871 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.129 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.129 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:24.129 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:24.129 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 782030 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 782030 ']' 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 782030 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:24.388 15:29:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782030 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782030' 00:24:24.388 killing process with pid 782030 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 782030 00:24:24.388 15:29:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 782030 00:24:24.388 Received shutdown signal, test time was about 0.982126 seconds 00:24:24.388 00:24:24.388 Latency(us) 00:24:24.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.388 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme1n1 : 0.98 262.42 16.40 0.00 0.00 241012.91 16165.55 242920.11 00:24:24.388 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme2n1 : 0.97 198.79 12.42 0.00 0.00 311765.33 19442.35 269134.51 00:24:24.388 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme3n1 : 0.96 266.04 16.63 0.00 0.00 227979.52 10758.83 256901.12 00:24:24.388 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme4n1 : 0.94 209.55 13.10 0.00 0.00 280802.04 3904.85 251658.24 00:24:24.388 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme5n1 : 0.97 263.35 16.46 0.00 0.00 220825.60 38010.88 225443.84 00:24:24.388 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme6n1 : 0.96 200.50 12.53 0.00 0.00 283048.96 20097.71 265639.25 00:24:24.388 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.388 Nvme7n1 : 0.97 264.13 16.51 0.00 0.00 210467.20 22828.37 244667.73 00:24:24.388 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.388 Verification LBA range: start 0x0 length 0x400 00:24:24.389 Nvme8n1 : 0.95 201.62 12.60 0.00 0.00 268595.48 20862.29 248162.99 00:24:24.389 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.389 Verification LBA range: start 0x0 length 0x400 00:24:24.389 Nvme9n1 : 0.98 261.69 16.36 0.00 0.00 202857.60 24029.87 246415.36 00:24:24.389 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:24.389 Verification LBA range: start 0x0 length 0x400 00:24:24.389 Nvme10n1 : 0.98 260.89 16.31 0.00 0.00 198870.19 14199.47 235929.60 00:24:24.389 =================================================================================================================== 00:24:24.389 Total : 2388.99 149.31 0.00 0.00 240125.03 
3904.85 269134.51 00:24:24.648 15:29:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 781643 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.586 rmmod nvme_tcp 00:24:25.586 rmmod nvme_fabrics 00:24:25.586 rmmod nvme_keyring 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 781643 ']' 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 781643 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 781643 ']' 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 781643 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.586 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781643 00:24:25.845 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:25.845 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:25.845 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781643' 00:24:25.845 killing process with pid 781643 00:24:25.845 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 781643 00:24:25.845 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 781643 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:26.103 
15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.103 15:29:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.010 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.010 00:24:28.010 real 0m8.060s 00:24:28.010 user 0m24.309s 00:24:28.010 sys 0m1.287s 00:24:28.010 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.010 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:28.010 ************************************ 00:24:28.010 END TEST nvmf_shutdown_tc2 00:24:28.010 ************************************ 00:24:28.010 15:29:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:28.010 15:29:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:28.011 15:29:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:28.011 15:29:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.011 15:29:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:28.271 ************************************ 00:24:28.271 START TEST nvmf_shutdown_tc3 00:24:28.271 ************************************ 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.271 
15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:28.271 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:28.271 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:28.271 Found net devices under 0000:31:00.0: cvl_0_0 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.271 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:28.272 Found net devices under 0000:31:00.1: cvl_0_1 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.272 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:24:28.532 00:24:28.532 --- 10.0.0.2 ping statistics --- 00:24:28.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.532 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:24:28.532 00:24:28.532 --- 10.0.0.1 ping statistics --- 00:24:28.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.532 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.532 15:29:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=783484 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 783484 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 783484 ']' 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 
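Before the target is started, nvmf_tcp_init has moved one port of the E810 pair (cvl_0_0) into a private network namespace and left its sibling (cvl_0_1) in the root namespace, so the 10.0.0.2/10.0.0.1 pings above really cross the physical link. Condensed to the bare commands, using the interface names and addresses from the trace (a sketch of the setup, not a replacement for nvmf/common.sh):

ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check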
00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.532 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.532 [2024-07-15 15:29:38.103485] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:28.532 [2024-07-15 15:29:38.103553] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.532 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.792 [2024-07-15 15:29:38.178943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.792 [2024-07-15 15:29:38.252656] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.792 [2024-07-15 15:29:38.252687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.792 [2024-07-15 15:29:38.252695] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.792 [2024-07-15 15:29:38.252704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.792 [2024-07-15 15:29:38.252710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
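nvmf_tgt is launched here with -m 0x1E, and the following lines report exactly four reactors on cores 1-4: 0x1E is binary 11110, so bit 0 (core 0) is clear and bits 1 through 4 are set. A quick, purely illustrative way to decode such a core mask (not part of the test scripts):

mask=0x1E
for cpu in $(seq 0 7); do
  if (( (mask >> cpu) & 1 )); then
    echo "reactor expected on core $cpu"
  fi
done
# prints cores 1, 2, 3 and 4 - matching the four "Reactor started on core N" notices below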
00:24:28.792 [2024-07-15 15:29:38.252813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.792 [2024-07-15 15:29:38.252948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.792 [2024-07-15 15:29:38.253371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.792 [2024-07-15 15:29:38.253370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.361 [2024-07-15 15:29:38.914466] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.361 15:29:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.620 Malloc1 00:24:29.620 [2024-07-15 15:29:39.014793] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.620 Malloc2 00:24:29.620 Malloc3 00:24:29.620 Malloc4 00:24:29.620 Malloc5 00:24:29.620 Malloc6 00:24:29.620 Malloc7 00:24:29.883 Malloc8 00:24:29.883 Malloc9 00:24:29.883 Malloc10 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=783869 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 783869 /var/tmp/bdevperf.sock 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 783869 ']' 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
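The Malloc1 through Malloc10 lines above are the bdevs that create_subsystems provisions before the shutdown test starts, and the listener notice shows where they are exported. The rpcs.txt batch that produces them is not echoed in this log; per subsystem it amounts to roughly the following standard SPDK RPCs (sizes, serial numbers and flags here are placeholders, the helper in target/shutdown.sh may use different ones):

i=1   # repeated for i in 1..10
./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420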
00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:29.883 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 [2024-07-15 15:29:39.465437] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:24:29.884 [2024-07-15 15:29:39.465489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783869 ] 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": ${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:29.884 { 00:24:29.884 "params": { 00:24:29.884 "name": "Nvme$subsystem", 00:24:29.884 "trtype": "$TEST_TRANSPORT", 00:24:29.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.884 "adrfam": "ipv4", 00:24:29.884 "trsvcid": "$NVMF_PORT", 00:24:29.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.884 "hdgst": 
${hdgst:-false}, 00:24:29.884 "ddgst": ${ddgst:-false} 00:24:29.884 }, 00:24:29.884 "method": "bdev_nvme_attach_controller" 00:24:29.884 } 00:24:29.884 EOF 00:24:29.884 )") 00:24:29.884 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:29.884 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:30.178 15:29:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:30.178 "params": { 00:24:30.178 "name": "Nvme1", 00:24:30.178 "trtype": "tcp", 00:24:30.178 "traddr": "10.0.0.2", 00:24:30.178 "adrfam": "ipv4", 00:24:30.178 "trsvcid": "4420", 00:24:30.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.178 "hdgst": false, 00:24:30.178 "ddgst": false 00:24:30.178 }, 00:24:30.178 "method": "bdev_nvme_attach_controller" 00:24:30.178 },{ 00:24:30.178 "params": { 00:24:30.178 "name": "Nvme2", 00:24:30.178 "trtype": "tcp", 00:24:30.178 "traddr": "10.0.0.2", 00:24:30.178 "adrfam": "ipv4", 00:24:30.178 "trsvcid": "4420", 00:24:30.178 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:30.178 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:30.178 "hdgst": false, 00:24:30.178 "ddgst": false 00:24:30.178 }, 00:24:30.178 "method": "bdev_nvme_attach_controller" 00:24:30.178 },{ 00:24:30.178 "params": { 00:24:30.178 "name": "Nvme3", 00:24:30.178 "trtype": "tcp", 00:24:30.178 "traddr": "10.0.0.2", 00:24:30.178 "adrfam": "ipv4", 00:24:30.178 "trsvcid": "4420", 00:24:30.178 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:30.178 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:30.178 "hdgst": false, 00:24:30.178 "ddgst": false 00:24:30.178 }, 00:24:30.178 "method": "bdev_nvme_attach_controller" 00:24:30.178 },{ 00:24:30.178 "params": { 00:24:30.178 "name": "Nvme4", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme5", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme6", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme7", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:30.179 "hdgst": false, 00:24:30.179 
"ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme8", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme9", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 },{ 00:24:30.179 "params": { 00:24:30.179 "name": "Nvme10", 00:24:30.179 "trtype": "tcp", 00:24:30.179 "traddr": "10.0.0.2", 00:24:30.179 "adrfam": "ipv4", 00:24:30.179 "trsvcid": "4420", 00:24:30.179 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:30.179 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:30.179 "hdgst": false, 00:24:30.179 "ddgst": false 00:24:30.179 }, 00:24:30.179 "method": "bdev_nvme_attach_controller" 00:24:30.179 }' 00:24:30.179 [2024-07-15 15:29:39.529690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.179 [2024-07-15 15:29:39.594263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.573 Running I/O for 10 seconds... 00:24:31.573 15:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.573 15:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:31.573 15:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:31.573 15:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.573 15:29:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.573 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.573 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.573 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:31.574 15:29:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.574 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.833 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.833 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:31.833 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:31.833 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=72 00:24:32.092 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:24:32.093 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=136 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 783484 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 783484 ']' 00:24:32.371 15:29:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 783484 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 783484 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 783484' 00:24:32.371 killing process with pid 783484 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 783484 00:24:32.371 15:29:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 783484
00:24:32.371 [2024-07-15 15:29:41.876549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f470 is same with the state(5) to be set
[... the same tcp.c:1621 nvmf_tcp_qpair_set_recv_state *ERROR* message repeats with successive timestamps: first for tqpair=0x205f470, then for tqpair=0x2061e50 (from 15:29:41.877879), tqpair=0x205f910 (from 15:29:41.879353), tqpair=0x205fdb0 (from 15:29:41.880967), tqpair=0x2060270 (from 15:29:41.882253), tqpair=0x2060710 (from 15:29:41.883504), and tqpair=0x2060bb0 (from 15:29:41.884624) ...]
00:24:32.375 [2024-07-15
15:29:41.885981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.885998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886005] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886011] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886018] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886029] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886035] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886042] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886048] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886082] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886089] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886095] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886108] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886116] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886122] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886128] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same 
with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886141] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886148] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886167] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886175] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886181] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886188] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886194] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886209] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886216] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886228] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886276] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886283] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886289] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886295] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886303] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886315] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886334] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886354] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886360] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886366] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886379] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886386] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886393] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886399] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.886405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2061510 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887019] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the 
state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887025] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887030] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887043] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887047] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887066] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887071] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887075] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887083] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887088] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887092] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887101] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887105] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887109] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887114] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887121] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887125] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887129] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887138] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.376 [2024-07-15 15:29:41.887142] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887147] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887151] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887155] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887159] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887164] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.887168] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.894995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195690 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190180 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cc1d0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 
15:29:41.895320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11803f0 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e630 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc75610 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181540 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 
15:29:41.895685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66090 is same with the state(5) to be set 00:24:32.377 [2024-07-15 15:29:41.895707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.377 [2024-07-15 15:29:41.895745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.377 [2024-07-15 15:29:41.895752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.378 [2024-07-15 15:29:41.895759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133d290 is same with the state(5) to be set 00:24:32.378 [2024-07-15 15:29:41.895841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.895988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.895996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.378 [2024-07-15 15:29:41.896499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.378 [2024-07-15 15:29:41.896508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.896892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.896948] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd60840 was disconnected and freed. reset controller. 
00:24:32.379 [2024-07-15 15:29:41.897134] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897153] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897176] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897182] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897187] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897192] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897201] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.897207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.897219] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.897235] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.897241] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897249] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.897254] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.379 [2024-07-15 15:29:41.897260] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 
00:24:32.379 [2024-07-15 15:29:41.897270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.379 [2024-07-15 15:29:41.897272] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897279] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.379 [2024-07-15 15:29:41.897279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897288] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897293] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897305] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20619b0 is same with the state(5) to be set 00:24:32.380 [2024-07-15 15:29:41.897316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.380 [2024-07-15 15:29:41.897539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 
[2024-07-15 15:29:41.897700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.897773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.897783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 
15:29:41.906232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.380 [2024-07-15 15:29:41.906302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.380 [2024-07-15 15:29:41.906309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 
15:29:41.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 
15:29:41.906566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dd1e0 is same with the state(5) to be set 00:24:32.381 [2024-07-15 15:29:41.906699] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12dd1e0 was disconnected and freed. reset controller. 
00:24:32.381 [2024-07-15 15:29:41.906799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.906973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 
15:29:41.906989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.906996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907152] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.381 [2024-07-15 15:29:41.907159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.381 [2024-07-15 15:29:41.907168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.382 [2024-07-15 15:29:41.907845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.382 [2024-07-15 15:29:41.907854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.907861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.907933] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12df920 was disconnected and freed. reset controller. 00:24:32.383 [2024-07-15 15:29:41.908039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195690 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.383 [2024-07-15 15:29:41.908083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.908091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.383 [2024-07-15 15:29:41.908098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.908106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.383 [2024-07-15 15:29:41.908113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.908121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:32.383 [2024-07-15 15:29:41.908128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.908135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3c90 is same with the state(5) to be set 00:24:32.383 [2024-07-15 15:29:41.908147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190180 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cc1d0 (9): Bad file 
descriptor 00:24:32.383 [2024-07-15 15:29:41.908171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11803f0 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e630 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75610 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1181540 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66090 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.908254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133d290 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.912107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:32.383 [2024-07-15 15:29:41.912460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:32.383 [2024-07-15 15:29:41.912482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:32.383 [2024-07-15 15:29:41.912494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3c90 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.912876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.383 [2024-07-15 15:29:41.912896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66090 with addr=10.0.0.2, port=4420 00:24:32.383 [2024-07-15 15:29:41.912905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66090 is same with the state(5) to be set 00:24:32.383 [2024-07-15 15:29:41.913659] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.913970] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.914359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.383 [2024-07-15 15:29:41.914397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190180 with addr=10.0.0.2, port=4420 00:24:32.383 [2024-07-15 15:29:41.914414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190180 is same with the state(5) to be set 00:24:32.383 [2024-07-15 15:29:41.914447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66090 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.914518] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.914601] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.914639] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.914678] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:32.383 [2024-07-15 15:29:41.915111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.383 [2024-07-15 15:29:41.915149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x11c3c90 with addr=10.0.0.2, port=4420 00:24:32.383 [2024-07-15 15:29:41.915161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3c90 is same with the state(5) to be set 00:24:32.383 [2024-07-15 15:29:41.915177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190180 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.915188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.383 [2024-07-15 15:29:41.915196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.383 [2024-07-15 15:29:41.915205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.383 [2024-07-15 15:29:41.915316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.383 [2024-07-15 15:29:41.915328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3c90 (9): Bad file descriptor 00:24:32.383 [2024-07-15 15:29:41.915336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:32.383 [2024-07-15 15:29:41.915343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:32.383 [2024-07-15 15:29:41.915349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:32.383 [2024-07-15 15:29:41.915387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.383 [2024-07-15 15:29:41.915674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.383 [2024-07-15 15:29:41.915683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.915989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.915996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.384 [2024-07-15 15:29:41.916336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.384 [2024-07-15 15:29:41.916343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.916457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.916465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132ea60 is same with the state(5) to be set 00:24:32.385 [2024-07-15 15:29:41.916504] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x132ea60 was disconnected and freed. reset controller. 00:24:32.385 [2024-07-15 15:29:41.916538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.385 [2024-07-15 15:29:41.916548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:32.385 [2024-07-15 15:29:41.916554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:32.385 [2024-07-15 15:29:41.916561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:32.385 [2024-07-15 15:29:41.917826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.385 [2024-07-15 15:29:41.917839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:32.385 [2024-07-15 15:29:41.918152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.385 [2024-07-15 15:29:41.918171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11803f0 with addr=10.0.0.2, port=4420 00:24:32.385 [2024-07-15 15:29:41.918181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11803f0 is same with the state(5) to be set 00:24:32.385 [2024-07-15 15:29:41.918481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11803f0 (9): Bad file descriptor 00:24:32.385 [2024-07-15 15:29:41.918587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:32.385 [2024-07-15 15:29:41.918595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:32.385 [2024-07-15 15:29:41.918602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:32.385 [2024-07-15 15:29:41.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 
15:29:41.918809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918978] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.918986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.918995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.385 [2024-07-15 15:29:41.919110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.385 [2024-07-15 15:29:41.919117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.919696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.919704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd61b40 is same with the state(5) to be set 00:24:32.386 [2024-07-15 15:29:41.920976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.920989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921085] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.386 [2024-07-15 15:29:41.921095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.386 [2024-07-15 15:29:41.921102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.387 [2024-07-15 15:29:41.921744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.387 [2024-07-15 15:29:41.921775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.387 [2024-07-15 15:29:41.921782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 
15:29:41.921909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.921990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.921997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.922006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.922013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.922022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.922029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.922037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132f500 is same with the state(5) to be set 00:24:32.388 [2024-07-15 15:29:41.923299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.388 [2024-07-15 15:29:41.923655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.388 [2024-07-15 15:29:41.923662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.923988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.923996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.389 [2024-07-15 15:29:41.924339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.389 [2024-07-15 15:29:41.924348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.924355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.924363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1330990 is same with the state(5) to be set 00:24:32.390 [2024-07-15 15:29:41.925625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.925986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.925996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.390 [2024-07-15 15:29:41.926314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.390 [2024-07-15 15:29:41.926323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:32.391 [2024-07-15 15:29:41.926429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 
15:29:41.926631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.926738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.926746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dbd50 is same with the state(5) to be set 00:24:32.391 [2024-07-15 15:29:41.928034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.391 [2024-07-15 15:29:41.928488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.391 [2024-07-15 15:29:41.928495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.928976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.928989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.392 [2024-07-15 15:29:41.929204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.392 [2024-07-15 15:29:41.929217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.929423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.929436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12de670 is same with the state(5) to be set 00:24:32.393 [2024-07-15 15:29:41.931456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931615] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.931989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.931998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.932005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.932015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.932022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.932031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.393 [2024-07-15 15:29:41.932038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.393 [2024-07-15 15:29:41.932048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:32.394 [2024-07-15 15:29:41.932294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 
15:29:41.932459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.394 [2024-07-15 15:29:41.932532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:32.394 [2024-07-15 15:29:41.932540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e0dd0 is same with the state(5) to be set 00:24:32.394 [2024-07-15 15:29:41.934322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.394 [2024-07-15 15:29:41.934341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:32.394 [2024-07-15 15:29:41.934352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:32.394 [2024-07-15 15:29:41.934361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:32.394 [2024-07-15 15:29:41.934430] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.394 [2024-07-15 15:29:41.934444] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.394 [2024-07-15 15:29:41.934455] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:32.394 [2024-07-15 15:29:41.934526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:32.394 [2024-07-15 15:29:41.934536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:32.394 task offset: 25216 on job bdev=Nvme1n1 fails
00:24:32.394
00:24:32.394 Latency(us)
00:24:32.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:32.394 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme1n1 ended in about 0.93 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme1n1 : 0.93 206.00 12.87 68.67 0.00 230278.83 24576.00 241172.48
00:24:32.394 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme2n1 ended in about 0.94 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme2n1 : 0.94 135.67 8.48 67.83 0.00 304508.87 24576.00 284863.15
00:24:32.394 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme3n1 ended in about 0.94 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme3n1 : 0.94 204.17 12.76 68.06 0.00 222648.64 13762.56 284863.15
00:24:32.394 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme4n1 ended in about 0.95 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme4n1 : 0.95 135.33 8.46 67.67 0.00 292290.56 18677.76 260396.37
00:24:32.394 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme5n1 ended in about 0.95 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme5n1 : 0.95 135.00 8.44 67.50 0.00 286660.84 20753.07 274377.39
00:24:32.394 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.394 Job: Nvme6n1 ended in about 0.95 seconds with error
00:24:32.394 Verification LBA range: start 0x0 length 0x400
00:24:32.394 Nvme6n1 : 0.95 134.66 8.42 67.33 0.00 281055.57 23811.41 290106.03
00:24:32.394 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.395 Job: Nvme7n1 ended in about 0.93 seconds with error
00:24:32.395 Verification LBA range: start 0x0 length 0x400
00:24:32.395 Nvme7n1 : 0.93 205.69 12.86 68.56 0.00 201555.20 15182.51 295348.91
00:24:32.395 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.395 Job: Nvme8n1 ended in about 0.95 seconds with error
00:24:32.395 Verification LBA range: start 0x0 length 0x400
00:24:32.395 Nvme8n1 : 0.95 134.28 8.39 67.14 0.00 269030.12 17803.95 277872.64
00:24:32.395 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.395 Job: Nvme9n1 ended in about 0.93 seconds with error
00:24:32.395 Verification LBA range: start 0x0 length 0x400
00:24:32.395 Nvme9n1 : 0.93 136.95 8.56 68.48 0.00 256312.89 16165.55 298844.16
00:24:32.395 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:32.395 Job: Nvme10n1 ended in about 0.96 seconds with error
00:24:32.395 Verification LBA range: start 0x0 length 0x400
00:24:32.395 Nvme10n1 : 0.96 133.85 8.37 66.92 0.00 257155.41 20971.52 265639.25
00:24:32.395 ===================================================================================================================
00:24:32.395 Total : 1561.61 97.60 678.16 0.00 256332.53 13762.56 298844.16
00:24:32.395 [2024-07-15 15:29:41.962754] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:32.395 [2024-07-15 15:29:41.962789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:32.395 [2024-07-15 15:29:41.963273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.963290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x133d290 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.963299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133d290 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.963622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.963631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119e630 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.963638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119e630 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.963846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.963855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1181540 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.963862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1181540 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.965458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:32.395 [2024-07-15 15:29:41.965472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:32.395 [2024-07-15 15:29:41.965480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:32.395 [2024-07-15 15:29:41.965723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.965735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc75610 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.965743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc75610 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.966043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.966054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1195690 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.966061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1195690 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.966400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.395 [2024-07-15 15:29:41.966409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11cc1d0 with addr=10.0.0.2, port=4420
00:24:32.395 [2024-07-15 15:29:41.966416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11cc1d0 is same with the state(5) to be set
00:24:32.395 [2024-07-15 15:29:41.966429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush
tqpair=0x133d290 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.966440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e630 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.966454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1181540 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.966482] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.395 [2024-07-15 15:29:41.966497] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.395 [2024-07-15 15:29:41.966509] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.395 [2024-07-15 15:29:41.966520] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:32.395 [2024-07-15 15:29:41.966580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:32.395 [2024-07-15 15:29:41.966975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.395 [2024-07-15 15:29:41.966987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd66090 with addr=10.0.0.2, port=4420 00:24:32.395 [2024-07-15 15:29:41.966995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66090 is same with the state(5) to be set 00:24:32.395 [2024-07-15 15:29:41.967214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.395 [2024-07-15 15:29:41.967223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1190180 with addr=10.0.0.2, port=4420 00:24:32.395 [2024-07-15 15:29:41.967230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1190180 is same with the state(5) to be set 00:24:32.395 [2024-07-15 15:29:41.967425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.395 [2024-07-15 15:29:41.967434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11c3c90 with addr=10.0.0.2, port=4420 00:24:32.395 [2024-07-15 15:29:41.967441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c3c90 is same with the state(5) to be set 00:24:32.395 [2024-07-15 15:29:41.967449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc75610 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1195690 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cc1d0 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:32.395 [2024-07-15 15:29:41.967499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:32.395 [2024-07-15 15:29:41.967522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:32.395 [2024-07-15 15:29:41.967609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:32.395 [2024-07-15 15:29:41.967833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11803f0 with addr=10.0.0.2, port=4420 00:24:32.395 [2024-07-15 15:29:41.967841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11803f0 is same with the state(5) to be set 00:24:32.395 [2024-07-15 15:29:41.967849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd66090 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1190180 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c3c90 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:32.395 [2024-07-15 15:29:41.967900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:32.395 [2024-07-15 15:29:41.967922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.967934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:32.395 [2024-07-15 15:29:41.967962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.967982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11803f0 (9): Bad file descriptor 00:24:32.395 [2024-07-15 15:29:41.967990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.967996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.968003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:32.395 [2024-07-15 15:29:41.968012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.968018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.968025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:32.395 [2024-07-15 15:29:41.968034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:32.395 [2024-07-15 15:29:41.968040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:32.395 [2024-07-15 15:29:41.968047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:32.395 [2024-07-15 15:29:41.968380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.395 [2024-07-15 15:29:41.968390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.396 [2024-07-15 15:29:41.968398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:32.396 [2024-07-15 15:29:41.968405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:32.396 [2024-07-15 15:29:41.968412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:32.396 [2024-07-15 15:29:41.968418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:32.396 [2024-07-15 15:29:41.968446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:32.655 15:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:32.655 15:29:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 783869 00:24:33.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (783869) - No such process 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.597 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.597 rmmod nvme_tcp 00:24:33.857 rmmod nvme_fabrics 00:24:33.857 rmmod nvme_keyring 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.857 15:29:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.765 00:24:35.765 real 0m7.678s 00:24:35.765 user 0m18.415s 00:24:35.765 sys 0m1.168s 00:24:35.765 
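The shutdown.sh@41-@45 steps traced above make up the test's cleanup path. A minimal sketch of that sequence, reconstructed from the trace (the function body below is illustrative rather than the canonical shutdown.sh source, and $testdir is a stand-in for the .../spdk/test/nvmf/target directory shown in the trace):

  stoptarget() {
      # drop the bdevperf job state plus the generated config and RPC scripts
      rm -f ./local-job0-0-verify.state
      rm -rf "$testdir/bdevperf.conf"
      rm -rf "$testdir/rpcs.txt"
      # nvmftestfini unloads nvme-tcp/nvme-fabrics/nvme-keyring and flushes the
      # test interfaces, which is the rmmod / ip output that follows in the trace
      nvmftestfini
  }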
15:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:35.765 ************************************ 00:24:35.765 END TEST nvmf_shutdown_tc3 00:24:35.765 ************************************ 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:35.765 00:24:35.765 real 0m33.382s 00:24:35.765 user 1m17.387s 00:24:35.765 sys 0m9.618s 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.765 15:29:45 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:35.765 ************************************ 00:24:35.765 END TEST nvmf_shutdown 00:24:35.765 ************************************ 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:36.025 15:29:45 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.025 15:29:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.025 15:29:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:36.025 15:29:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:36.025 15:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.025 ************************************ 00:24:36.025 START TEST nvmf_multicontroller 00:24:36.025 ************************************ 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.025 * Looking for test storage... 
00:24:36.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:36.025 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:36.026 15:29:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.026 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.286 15:29:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.286 15:29:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.416 15:29:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:44.416 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:44.416 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:44.416 Found net devices under 0000:31:00.0: cvl_0_0 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:44.416 Found net devices under 0000:31:00.1: cvl_0_1 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.416 15:29:53 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:24:44.416 00:24:44.416 --- 10.0.0.2 ping statistics --- 00:24:44.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.416 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:24:44.416 00:24:44.416 --- 10.0.0.1 ping statistics --- 00:24:44.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.416 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=789098 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 789098 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 789098 ']' 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.416 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.417 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:44.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.417 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.417 15:29:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.417 15:29:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:44.417 [2024-07-15 15:29:53.540410] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:44.417 [2024-07-15 15:29:53.540477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.417 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.417 [2024-07-15 15:29:53.616072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.417 [2024-07-15 15:29:53.688788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.417 [2024-07-15 15:29:53.688829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.417 [2024-07-15 15:29:53.688837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.417 [2024-07-15 15:29:53.688843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.417 [2024-07-15 15:29:53.688848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.417 [2024-07-15 15:29:53.688950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.417 [2024-07-15 15:29:53.689232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.417 [2024-07-15 15:29:53.689231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.986 [2024-07-15 15:29:54.368549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.986 15:29:54 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.986 Malloc0 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.986 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 [2024-07-15 15:29:54.440321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 [2024-07-15 15:29:54.452272] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 Malloc1 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=789176 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 789176 /var/tmp/bdevperf.sock 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 789176 ']' 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
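Condensed from the rpc_cmd trace above, the multicontroller setup boils down to the sequence below. This is a sketch using scripts/rpc.py against the default target socket, omitting the netns and rpc_cmd plumbing the harness adds; cnode2/Malloc1 are created the same way as cnode1/Malloc0.

  # target side: one TCP transport, subsystems backed by 64 MiB malloc bdevs,
  # each listening on both 4420 and 4421 at 10.0.0.2
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # initiator side: bdevperf is started idle (-z) with its own RPC socket; the
  # NVMe0/NVMe1 controllers are attached to it afterwards, as the trace below shows
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &

The repeated bdev_nvme_attach_controller calls that follow (same controller name NVMe0, same host address and port) are the point of the test: each is expected to fail with code -114, either because a controller named NVMe0 already exists on that network path or because multipath is disabled or set to failover for an identical path.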
00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.987 15:29:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.925 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.925 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:45.925 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:45.925 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.925 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.196 NVMe0n1 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.196 1 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.196 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.196 request: 00:24:46.196 { 00:24:46.196 "name": "NVMe0", 00:24:46.196 "trtype": "tcp", 00:24:46.196 "traddr": "10.0.0.2", 00:24:46.196 "adrfam": "ipv4", 00:24:46.196 "trsvcid": "4420", 00:24:46.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.196 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:46.196 "hostaddr": "10.0.0.2", 00:24:46.196 "hostsvcid": "60000", 00:24:46.196 "prchk_reftag": false, 
00:24:46.196 "prchk_guard": false, 00:24:46.196 "hdgst": false, 00:24:46.196 "ddgst": false, 00:24:46.196 "method": "bdev_nvme_attach_controller", 00:24:46.196 "req_id": 1 00:24:46.196 } 00:24:46.196 Got JSON-RPC error response 00:24:46.196 response: 00:24:46.196 { 00:24:46.197 "code": -114, 00:24:46.197 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:46.197 } 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.197 request: 00:24:46.197 { 00:24:46.197 "name": "NVMe0", 00:24:46.197 "trtype": "tcp", 00:24:46.197 "traddr": "10.0.0.2", 00:24:46.197 "adrfam": "ipv4", 00:24:46.197 "trsvcid": "4420", 00:24:46.197 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:46.197 "hostaddr": "10.0.0.2", 00:24:46.197 "hostsvcid": "60000", 00:24:46.197 "prchk_reftag": false, 00:24:46.197 "prchk_guard": false, 00:24:46.197 "hdgst": false, 00:24:46.197 "ddgst": false, 00:24:46.197 "method": "bdev_nvme_attach_controller", 00:24:46.197 "req_id": 1 00:24:46.197 } 00:24:46.197 Got JSON-RPC error response 00:24:46.197 response: 00:24:46.197 { 00:24:46.197 "code": -114, 00:24:46.197 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:46.197 } 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.197 request: 00:24:46.197 { 00:24:46.197 "name": "NVMe0", 00:24:46.197 "trtype": "tcp", 00:24:46.197 "traddr": "10.0.0.2", 00:24:46.197 "adrfam": "ipv4", 00:24:46.197 "trsvcid": "4420", 00:24:46.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.197 "hostaddr": "10.0.0.2", 00:24:46.197 "hostsvcid": "60000", 00:24:46.197 "prchk_reftag": false, 00:24:46.197 "prchk_guard": false, 00:24:46.197 "hdgst": false, 00:24:46.197 "ddgst": false, 00:24:46.197 "multipath": "disable", 00:24:46.197 "method": "bdev_nvme_attach_controller", 00:24:46.197 "req_id": 1 00:24:46.197 } 00:24:46.197 Got JSON-RPC error response 00:24:46.197 response: 00:24:46.197 { 00:24:46.197 "code": -114, 00:24:46.197 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:46.197 } 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.197 request: 00:24:46.197 { 00:24:46.197 "name": "NVMe0", 00:24:46.197 "trtype": "tcp", 00:24:46.197 "traddr": "10.0.0.2", 00:24:46.197 "adrfam": "ipv4", 00:24:46.197 "trsvcid": "4420", 00:24:46.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.197 "hostaddr": "10.0.0.2", 00:24:46.197 "hostsvcid": "60000", 00:24:46.197 "prchk_reftag": false, 00:24:46.197 "prchk_guard": false, 00:24:46.197 "hdgst": false, 00:24:46.197 "ddgst": false, 00:24:46.197 "multipath": "failover", 00:24:46.197 "method": "bdev_nvme_attach_controller", 00:24:46.197 "req_id": 1 00:24:46.197 } 00:24:46.197 Got JSON-RPC error response 00:24:46.197 response: 00:24:46.197 { 00:24:46.197 "code": -114, 00:24:46.197 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:46.197 } 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.197 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.197 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.455 00:24:46.455 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.455 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.455 15:29:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:46.455 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.455 15:29:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:46.455 15:29:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.455 15:29:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:46.455 15:29:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:47.832 0 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 789176 ']' 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789176' 00:24:47.832 killing process with pid 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 789176 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:47.832 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:47.833 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:47.833 [2024-07-15 15:29:54.572476] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:47.833 [2024-07-15 15:29:54.572534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789176 ] 00:24:47.833 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.833 [2024-07-15 15:29:54.637366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.833 [2024-07-15 15:29:54.702001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.833 [2024-07-15 15:29:55.980847] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e197cf03-bfba-4e64-9031-c3eea7225903 already exists 00:24:47.833 [2024-07-15 15:29:55.980878] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e197cf03-bfba-4e64-9031-c3eea7225903 alias for bdev NVMe1n1 00:24:47.833 [2024-07-15 15:29:55.980892] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:47.833 Running I/O for 1 seconds... 
00:24:47.833 00:24:47.833 Latency(us) 00:24:47.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.833 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:47.833 NVMe0n1 : 1.00 29271.09 114.34 0.00 0.00 4362.67 2102.61 15400.96 00:24:47.833 =================================================================================================================== 00:24:47.833 Total : 29271.09 114.34 0.00 0.00 4362.67 2102.61 15400.96 00:24:47.833 Received shutdown signal, test time was about 1.000000 seconds 00:24:47.833 00:24:47.833 Latency(us) 00:24:47.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.833 =================================================================================================================== 00:24:47.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.833 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.833 rmmod nvme_tcp 00:24:47.833 rmmod nvme_fabrics 00:24:47.833 rmmod nvme_keyring 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 789098 ']' 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 789098 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 789098 ']' 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 789098 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.833 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789098 00:24:48.092 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:48.092 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:48.092 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789098' 00:24:48.092 killing process with pid 789098 00:24:48.092 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 789098 00:24:48.092 15:29:57 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 789098 00:24:48.092 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.093 15:29:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.687 15:29:59 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:50.687 00:24:50.687 real 0m14.202s 00:24:50.687 user 0m17.057s 00:24:50.687 sys 0m6.537s 00:24:50.687 15:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.687 15:29:59 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:50.687 ************************************ 00:24:50.687 END TEST nvmf_multicontroller 00:24:50.687 ************************************ 00:24:50.687 15:29:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.687 15:29:59 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:50.687 15:29:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.687 15:29:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.687 15:29:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.687 ************************************ 00:24:50.687 START TEST nvmf_aer 00:24:50.687 ************************************ 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:50.687 * Looking for test storage... 
00:24:50.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.687 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.688 15:29:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:58.811 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:58.811 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:58.811 Found net devices under 0000:31:00.0: cvl_0_0 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:58.811 Found net devices under 0000:31:00.1: cvl_0_1 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.811 
15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.811 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:24:58.811 00:24:58.812 --- 10.0.0.2 ping statistics --- 00:24:58.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.812 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:58.812 00:24:58.812 --- 10.0.0.1 ping statistics --- 00:24:58.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.812 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=794410 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 794410 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 794410 ']' 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.812 15:30:07 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:58.812 [2024-07-15 15:30:07.580791] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:24:58.812 [2024-07-15 15:30:07.580857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.812 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.812 [2024-07-15 15:30:07.656796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.812 [2024-07-15 15:30:07.731674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.812 [2024-07-15 15:30:07.731713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
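For reference, the nvmf_tcp_init sequence traced above condenses to the shell steps below. Every command, interface name (cvl_0_0 / cvl_0_1 are the e810 ports this run detected) and address is taken from this trace, so this is only a condensed sketch of what already executed, not a general recipe:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns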
00:24:58.812 [2024-07-15 15:30:07.731721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.812 [2024-07-15 15:30:07.731728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.812 [2024-07-15 15:30:07.731733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.812 [2024-07-15 15:30:07.731842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.812 [2024-07-15 15:30:07.731977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.812 [2024-07-15 15:30:07.732075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.812 [2024-07-15 15:30:07.732077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:58.812 [2024-07-15 15:30:08.412480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.812 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.072 Malloc0 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.072 [2024-07-15 15:30:08.471821] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.072 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.072 [ 00:24:59.072 { 00:24:59.072 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.072 "subtype": "Discovery", 00:24:59.072 "listen_addresses": [], 00:24:59.072 "allow_any_host": true, 00:24:59.072 "hosts": [] 00:24:59.072 }, 00:24:59.072 { 00:24:59.072 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.072 "subtype": "NVMe", 00:24:59.072 "listen_addresses": [ 00:24:59.072 { 00:24:59.072 "trtype": "TCP", 00:24:59.072 "adrfam": "IPv4", 00:24:59.072 "traddr": "10.0.0.2", 00:24:59.072 "trsvcid": "4420" 00:24:59.072 } 00:24:59.072 ], 00:24:59.072 "allow_any_host": true, 00:24:59.072 "hosts": [], 00:24:59.072 "serial_number": "SPDK00000000000001", 00:24:59.072 "model_number": "SPDK bdev Controller", 00:24:59.072 "max_namespaces": 2, 00:24:59.072 "min_cntlid": 1, 00:24:59.072 "max_cntlid": 65519, 00:24:59.072 "namespaces": [ 00:24:59.072 { 00:24:59.072 "nsid": 1, 00:24:59.072 "bdev_name": "Malloc0", 00:24:59.072 "name": "Malloc0", 00:24:59.072 "nguid": "E13A474528094B1BA5B98C43CA304BFF", 00:24:59.072 "uuid": "e13a4745-2809-4b1b-a5b9-8c43ca304bff" 00:24:59.073 } 00:24:59.073 ] 00:24:59.073 } 00:24:59.073 ] 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=794543 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:59.073 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:59.073 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 Malloc1 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 Asynchronous Event Request test 00:24:59.333 Attaching to 10.0.0.2 00:24:59.333 Attached to 10.0.0.2 00:24:59.333 Registering asynchronous event callbacks... 00:24:59.333 Starting namespace attribute notice tests for all controllers... 00:24:59.333 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:59.333 aer_cb - Changed Namespace 00:24:59.333 Cleaning up... 00:24:59.333 [ 00:24:59.333 { 00:24:59.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:59.333 "subtype": "Discovery", 00:24:59.333 "listen_addresses": [], 00:24:59.333 "allow_any_host": true, 00:24:59.333 "hosts": [] 00:24:59.333 }, 00:24:59.333 { 00:24:59.333 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.333 "subtype": "NVMe", 00:24:59.333 "listen_addresses": [ 00:24:59.333 { 00:24:59.333 "trtype": "TCP", 00:24:59.333 "adrfam": "IPv4", 00:24:59.333 "traddr": "10.0.0.2", 00:24:59.333 "trsvcid": "4420" 00:24:59.333 } 00:24:59.333 ], 00:24:59.333 "allow_any_host": true, 00:24:59.333 "hosts": [], 00:24:59.333 "serial_number": "SPDK00000000000001", 00:24:59.333 "model_number": "SPDK bdev Controller", 00:24:59.333 "max_namespaces": 2, 00:24:59.333 "min_cntlid": 1, 00:24:59.333 "max_cntlid": 65519, 00:24:59.333 "namespaces": [ 00:24:59.333 { 00:24:59.333 "nsid": 1, 00:24:59.333 "bdev_name": "Malloc0", 00:24:59.333 "name": "Malloc0", 00:24:59.333 "nguid": "E13A474528094B1BA5B98C43CA304BFF", 00:24:59.333 "uuid": "e13a4745-2809-4b1b-a5b9-8c43ca304bff" 00:24:59.333 }, 00:24:59.333 { 00:24:59.333 "nsid": 2, 00:24:59.333 "bdev_name": "Malloc1", 00:24:59.333 "name": "Malloc1", 00:24:59.333 "nguid": "C98BBAF2B5184FC5B58568F45EA1EB38", 00:24:59.333 "uuid": "c98bbaf2-b518-4fc5-b585-68f45ea1eb38" 00:24:59.333 } 00:24:59.333 ] 00:24:59.333 } 00:24:59.333 ] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 794543 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:59.333 rmmod nvme_tcp 00:24:59.333 rmmod nvme_fabrics 00:24:59.333 rmmod nvme_keyring 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 794410 ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 794410 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 794410 ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 794410 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794410 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794410' 00:24:59.333 killing process with pid 794410 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 794410 00:24:59.333 15:30:08 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 794410 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
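Read back out of the trace, the aer.sh body that just finished amounts to a short RPC sequence plus the aer tool. The sketch below condenses only what the xtrace lines above show; rpc_cmd is the harness wrapper (which effectively forwards to scripts/rpc.py on /var/tmp/spdk.sock), and the touch-file handshake is the aerpid/waitforfile exchange visible in the log:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the aer tool attaches, registers AER callbacks, then signals readiness via the touch file
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  waitforfile /tmp/aer_touch_file
  # hot-adding a second namespace is what triggers the 'Changed Namespace' AER seen above
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait $aerpid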
00:24:59.594 15:30:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.135 15:30:11 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:02.135 00:25:02.135 real 0m11.374s 00:25:02.136 user 0m7.639s 00:25:02.136 sys 0m6.025s 00:25:02.136 15:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:02.136 15:30:11 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:02.136 ************************************ 00:25:02.136 END TEST nvmf_aer 00:25:02.136 ************************************ 00:25:02.136 15:30:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:02.136 15:30:11 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:02.136 15:30:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:02.136 15:30:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.136 15:30:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:02.136 ************************************ 00:25:02.136 START TEST nvmf_async_init 00:25:02.136 ************************************ 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:02.136 * Looking for test storage... 00:25:02.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d9c30c958c004f378a00c9fdd8e1900d 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:25:02.136 15:30:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:10.276 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:10.276 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:10.276 Found net devices under 0000:31:00.0: cvl_0_0 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:10.276 Found net devices under 0000:31:00.1: cvl_0_1 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.276 
15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.276 15:30:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:25:10.276 00:25:10.276 --- 10.0.0.2 ping statistics --- 00:25:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.276 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:25:10.276 00:25:10.276 --- 10.0.0.1 ping statistics --- 00:25:10.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.276 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.276 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=799549 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 799549 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 799549 ']' 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.277 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.277 [2024-07-15 15:30:19.238159] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:10.277 [2024-07-15 15:30:19.238228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.277 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.277 [2024-07-15 15:30:19.313929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.277 [2024-07-15 15:30:19.386671] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.277 [2024-07-15 15:30:19.386711] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.277 [2024-07-15 15:30:19.386718] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.277 [2024-07-15 15:30:19.386724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.277 [2024-07-15 15:30:19.386730] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.277 [2024-07-15 15:30:19.386755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.538 15:30:19 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 [2024-07-15 15:30:20.049915] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 null0 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d9c30c958c004f378a00c9fdd8e1900d 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.538 [2024-07-15 15:30:20.110167] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.538 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.798 nvme0n1 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.798 [ 00:25:10.798 { 00:25:10.798 "name": "nvme0n1", 00:25:10.798 "aliases": [ 00:25:10.798 "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d" 00:25:10.798 ], 00:25:10.798 "product_name": "NVMe disk", 00:25:10.798 "block_size": 512, 00:25:10.798 "num_blocks": 2097152, 00:25:10.798 "uuid": "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d", 00:25:10.798 "assigned_rate_limits": { 00:25:10.798 "rw_ios_per_sec": 0, 00:25:10.798 "rw_mbytes_per_sec": 0, 00:25:10.798 "r_mbytes_per_sec": 0, 00:25:10.798 "w_mbytes_per_sec": 0 00:25:10.798 }, 00:25:10.798 "claimed": false, 00:25:10.798 "zoned": false, 00:25:10.798 "supported_io_types": { 00:25:10.798 "read": true, 00:25:10.798 "write": true, 00:25:10.798 "unmap": false, 00:25:10.798 "flush": true, 00:25:10.798 "reset": true, 00:25:10.798 "nvme_admin": true, 00:25:10.798 "nvme_io": true, 00:25:10.798 "nvme_io_md": false, 00:25:10.798 "write_zeroes": true, 00:25:10.798 "zcopy": false, 00:25:10.798 "get_zone_info": false, 00:25:10.798 "zone_management": false, 00:25:10.798 "zone_append": false, 00:25:10.798 "compare": true, 00:25:10.798 "compare_and_write": true, 00:25:10.798 "abort": true, 00:25:10.798 "seek_hole": false, 00:25:10.798 "seek_data": false, 00:25:10.798 "copy": true, 00:25:10.798 "nvme_iov_md": false 00:25:10.798 }, 00:25:10.798 "memory_domains": [ 00:25:10.798 { 00:25:10.798 "dma_device_id": "system", 00:25:10.798 "dma_device_type": 1 00:25:10.798 } 00:25:10.798 ], 00:25:10.798 "driver_specific": { 00:25:10.798 "nvme": [ 00:25:10.798 { 00:25:10.798 "trid": { 00:25:10.798 "trtype": "TCP", 00:25:10.798 "adrfam": "IPv4", 00:25:10.798 "traddr": "10.0.0.2", 00:25:10.798 "trsvcid": "4420", 00:25:10.798 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:10.798 }, 00:25:10.798 "ctrlr_data": { 00:25:10.798 "cntlid": 1, 00:25:10.798 "vendor_id": "0x8086", 00:25:10.798 "model_number": "SPDK bdev Controller", 00:25:10.798 "serial_number": "00000000000000000000", 00:25:10.798 "firmware_revision": "24.09", 00:25:10.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.798 "oacs": { 00:25:10.798 "security": 0, 00:25:10.798 "format": 0, 00:25:10.798 "firmware": 0, 00:25:10.798 "ns_manage": 0 00:25:10.798 }, 00:25:10.798 "multi_ctrlr": true, 00:25:10.798 "ana_reporting": false 00:25:10.798 }, 00:25:10.798 "vs": { 00:25:10.798 "nvme_version": "1.3" 00:25:10.798 }, 00:25:10.798 "ns_data": { 00:25:10.798 "id": 1, 00:25:10.798 "can_share": true 00:25:10.798 } 00:25:10.798 } 00:25:10.798 ], 00:25:10.798 "mp_policy": "active_passive" 00:25:10.798 } 00:25:10.798 } 00:25:10.798 ] 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.798 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.798 [2024-07-15 15:30:20.386734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.798 [2024-07-15 15:30:20.386796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b7cc0 (9): Bad file descriptor 00:25:11.058 [2024-07-15 15:30:20.518979] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.058 [ 00:25:11.058 { 00:25:11.058 "name": "nvme0n1", 00:25:11.058 "aliases": [ 00:25:11.058 "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d" 00:25:11.058 ], 00:25:11.058 "product_name": "NVMe disk", 00:25:11.058 "block_size": 512, 00:25:11.058 "num_blocks": 2097152, 00:25:11.058 "uuid": "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d", 00:25:11.058 "assigned_rate_limits": { 00:25:11.058 "rw_ios_per_sec": 0, 00:25:11.058 "rw_mbytes_per_sec": 0, 00:25:11.058 "r_mbytes_per_sec": 0, 00:25:11.058 "w_mbytes_per_sec": 0 00:25:11.058 }, 00:25:11.058 "claimed": false, 00:25:11.058 "zoned": false, 00:25:11.058 "supported_io_types": { 00:25:11.058 "read": true, 00:25:11.058 "write": true, 00:25:11.058 "unmap": false, 00:25:11.058 "flush": true, 00:25:11.058 "reset": true, 00:25:11.058 "nvme_admin": true, 00:25:11.058 "nvme_io": true, 00:25:11.058 "nvme_io_md": false, 00:25:11.058 "write_zeroes": true, 00:25:11.058 "zcopy": false, 00:25:11.058 "get_zone_info": false, 00:25:11.058 "zone_management": false, 00:25:11.058 "zone_append": false, 00:25:11.058 "compare": true, 00:25:11.058 "compare_and_write": true, 00:25:11.058 "abort": true, 00:25:11.058 "seek_hole": false, 00:25:11.058 "seek_data": false, 00:25:11.058 "copy": true, 00:25:11.058 "nvme_iov_md": false 00:25:11.058 }, 00:25:11.058 "memory_domains": [ 00:25:11.058 { 00:25:11.058 "dma_device_id": "system", 00:25:11.058 "dma_device_type": 1 00:25:11.058 } 00:25:11.058 ], 00:25:11.058 "driver_specific": { 00:25:11.058 "nvme": [ 00:25:11.058 { 00:25:11.058 "trid": { 00:25:11.058 "trtype": "TCP", 00:25:11.058 "adrfam": "IPv4", 00:25:11.058 "traddr": "10.0.0.2", 00:25:11.058 "trsvcid": "4420", 00:25:11.058 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:11.058 }, 00:25:11.058 "ctrlr_data": { 00:25:11.058 "cntlid": 2, 00:25:11.058 "vendor_id": "0x8086", 00:25:11.058 "model_number": "SPDK bdev Controller", 00:25:11.058 "serial_number": "00000000000000000000", 00:25:11.058 "firmware_revision": "24.09", 00:25:11.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.058 "oacs": { 00:25:11.058 "security": 0, 00:25:11.058 "format": 0, 00:25:11.058 "firmware": 0, 00:25:11.058 "ns_manage": 0 00:25:11.058 }, 00:25:11.058 "multi_ctrlr": true, 00:25:11.058 "ana_reporting": false 00:25:11.058 }, 00:25:11.058 "vs": { 00:25:11.058 "nvme_version": "1.3" 00:25:11.058 }, 00:25:11.058 "ns_data": { 00:25:11.058 "id": 1, 00:25:11.058 "can_share": true 00:25:11.058 } 00:25:11.058 } 00:25:11.058 ], 00:25:11.058 "mp_policy": "active_passive" 00:25:11.058 } 00:25:11.058 } 
00:25:11.058 ] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3iEklZsXZu 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3iEklZsXZu 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.058 [2024-07-15 15:30:20.587370] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:11.058 [2024-07-15 15:30:20.587471] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3iEklZsXZu 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.058 [2024-07-15 15:30:20.599393] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3iEklZsXZu 00:25:11.058 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.059 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.059 [2024-07-15 15:30:20.611443] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.059 [2024-07-15 15:30:20.611475] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:25:11.059 nvme0n1 00:25:11.059 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.059 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:11.059 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.059 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.318 [ 00:25:11.318 { 00:25:11.318 "name": "nvme0n1", 00:25:11.318 "aliases": [ 00:25:11.318 "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d" 00:25:11.318 ], 00:25:11.318 "product_name": "NVMe disk", 00:25:11.318 "block_size": 512, 00:25:11.318 "num_blocks": 2097152, 00:25:11.318 "uuid": "d9c30c95-8c00-4f37-8a00-c9fdd8e1900d", 00:25:11.318 "assigned_rate_limits": { 00:25:11.318 "rw_ios_per_sec": 0, 00:25:11.318 "rw_mbytes_per_sec": 0, 00:25:11.318 "r_mbytes_per_sec": 0, 00:25:11.318 "w_mbytes_per_sec": 0 00:25:11.318 }, 00:25:11.318 "claimed": false, 00:25:11.318 "zoned": false, 00:25:11.318 "supported_io_types": { 00:25:11.318 "read": true, 00:25:11.318 "write": true, 00:25:11.318 "unmap": false, 00:25:11.318 "flush": true, 00:25:11.318 "reset": true, 00:25:11.318 "nvme_admin": true, 00:25:11.318 "nvme_io": true, 00:25:11.318 "nvme_io_md": false, 00:25:11.318 "write_zeroes": true, 00:25:11.318 "zcopy": false, 00:25:11.318 "get_zone_info": false, 00:25:11.318 "zone_management": false, 00:25:11.318 "zone_append": false, 00:25:11.318 "compare": true, 00:25:11.318 "compare_and_write": true, 00:25:11.318 "abort": true, 00:25:11.318 "seek_hole": false, 00:25:11.318 "seek_data": false, 00:25:11.318 "copy": true, 00:25:11.318 "nvme_iov_md": false 00:25:11.318 }, 00:25:11.318 "memory_domains": [ 00:25:11.318 { 00:25:11.318 "dma_device_id": "system", 00:25:11.318 "dma_device_type": 1 00:25:11.318 } 00:25:11.318 ], 00:25:11.318 "driver_specific": { 00:25:11.318 "nvme": [ 00:25:11.318 { 00:25:11.318 "trid": { 00:25:11.318 "trtype": "TCP", 00:25:11.318 "adrfam": "IPv4", 00:25:11.318 "traddr": "10.0.0.2", 00:25:11.318 "trsvcid": "4421", 00:25:11.318 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:11.318 }, 00:25:11.318 "ctrlr_data": { 00:25:11.318 "cntlid": 3, 00:25:11.318 "vendor_id": "0x8086", 00:25:11.318 "model_number": "SPDK bdev Controller", 00:25:11.318 "serial_number": "00000000000000000000", 00:25:11.318 "firmware_revision": "24.09", 00:25:11.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.318 "oacs": { 00:25:11.318 "security": 0, 00:25:11.318 "format": 0, 00:25:11.318 "firmware": 0, 00:25:11.318 "ns_manage": 0 00:25:11.318 }, 00:25:11.318 "multi_ctrlr": true, 00:25:11.318 "ana_reporting": false 00:25:11.318 }, 00:25:11.318 "vs": { 00:25:11.318 "nvme_version": "1.3" 00:25:11.318 }, 00:25:11.318 "ns_data": { 00:25:11.318 "id": 1, 00:25:11.318 "can_share": true 00:25:11.318 } 00:25:11.318 } 00:25:11.318 ], 00:25:11.318 "mp_policy": "active_passive" 00:25:11.318 } 00:25:11.318 } 00:25:11.318 ] 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.3iEklZsXZu 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.319 rmmod nvme_tcp 00:25:11.319 rmmod nvme_fabrics 00:25:11.319 rmmod nvme_keyring 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 799549 ']' 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 799549 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 799549 ']' 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 799549 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 799549 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 799549' 00:25:11.319 killing process with pid 799549 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 799549 00:25:11.319 [2024-07-15 15:30:20.846087] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:11.319 [2024-07-15 15:30:20.846114] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:11.319 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 799549 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.578 15:30:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:13.483 15:30:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.483 00:25:13.483 real 0m11.799s 00:25:13.483 user 0m4.175s 00:25:13.483 sys 0m6.088s 00:25:13.483 15:30:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.483 15:30:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:13.483 ************************************ 00:25:13.483 END TEST nvmf_async_init 00:25:13.483 ************************************ 00:25:13.483 15:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.483 15:30:23 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:13.483 15:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.483 15:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.483 15:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.742 ************************************ 00:25:13.742 START TEST dma 00:25:13.742 ************************************ 00:25:13.742 15:30:23 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:13.742 * Looking for test storage... 00:25:13.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.742 15:30:23 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.742 15:30:23 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.742 15:30:23 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.742 15:30:23 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.742 15:30:23 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.742 15:30:23 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.742 15:30:23 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.742 15:30:23 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:25:13.742 15:30:23 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.742 15:30:23 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.742 15:30:23 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:13.742 15:30:23 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:25:13.742 00:25:13.742 real 0m0.129s 00:25:13.742 user 0m0.059s 00:25:13.742 sys 0m0.078s 00:25:13.742 15:30:23 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:13.742 15:30:23 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:25:13.742 ************************************ 00:25:13.742 END TEST dma 00:25:13.742 ************************************ 00:25:13.742 15:30:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:13.742 15:30:23 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:13.742 15:30:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:13.742 15:30:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.742 15:30:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.742 ************************************ 00:25:13.742 START TEST nvmf_identify 00:25:13.742 ************************************ 00:25:13.742 15:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:14.002 * Looking for test storage... 00:25:14.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.002 15:30:23 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:22.136 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:22.136 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:22.136 Found net devices under 0000:31:00.0: cvl_0_0 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:22.136 Found net devices under 0000:31:00.1: cvl_0_1 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:22.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:25:22.136 00:25:22.136 --- 10.0.0.2 ping statistics --- 00:25:22.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.136 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:22.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:25:22.136 00:25:22.136 --- 10.0.0.1 ping statistics --- 00:25:22.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.136 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.136 15:30:30 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=804484 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 804484 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 804484 ']' 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.137 15:30:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.137 [2024-07-15 15:30:31.049304] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:25:22.137 [2024-07-15 15:30:31.049368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.137 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.137 [2024-07-15 15:30:31.125138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.137 [2024-07-15 15:30:31.200564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.137 [2024-07-15 15:30:31.200603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.137 [2024-07-15 15:30:31.200613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.137 [2024-07-15 15:30:31.200620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.137 [2024-07-15 15:30:31.200626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.137 [2024-07-15 15:30:31.200732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.137 [2024-07-15 15:30:31.200849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.137 [2024-07-15 15:30:31.201010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.137 [2024-07-15 15:30:31.201011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 [2024-07-15 15:30:31.842396] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 Malloc0 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 [2024-07-15 15:30:31.941854] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.412 [ 00:25:22.412 { 00:25:22.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:22.412 "subtype": "Discovery", 00:25:22.412 "listen_addresses": [ 00:25:22.412 { 00:25:22.412 "trtype": "TCP", 00:25:22.412 "adrfam": "IPv4", 00:25:22.412 "traddr": "10.0.0.2", 00:25:22.412 "trsvcid": "4420" 00:25:22.412 } 00:25:22.412 ], 00:25:22.412 "allow_any_host": true, 00:25:22.412 "hosts": [] 00:25:22.412 }, 00:25:22.412 { 00:25:22.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.412 "subtype": "NVMe", 00:25:22.412 "listen_addresses": [ 00:25:22.412 { 00:25:22.412 "trtype": "TCP", 00:25:22.412 "adrfam": "IPv4", 00:25:22.412 "traddr": "10.0.0.2", 00:25:22.412 "trsvcid": "4420" 00:25:22.412 } 00:25:22.412 ], 00:25:22.412 "allow_any_host": true, 00:25:22.412 "hosts": [], 00:25:22.412 "serial_number": "SPDK00000000000001", 00:25:22.412 "model_number": "SPDK bdev Controller", 00:25:22.412 "max_namespaces": 32, 00:25:22.412 "min_cntlid": 1, 00:25:22.412 "max_cntlid": 65519, 00:25:22.412 "namespaces": [ 00:25:22.412 { 00:25:22.412 "nsid": 1, 00:25:22.412 "bdev_name": "Malloc0", 00:25:22.412 "name": "Malloc0", 00:25:22.412 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:22.412 "eui64": "ABCDEF0123456789", 00:25:22.412 "uuid": "d08fab12-b70f-4878-ad9b-a47873daa84e" 00:25:22.412 } 00:25:22.412 ] 00:25:22.412 } 00:25:22.412 ] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.412 15:30:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:22.412 [2024-07-15 15:30:32.004032] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:25:22.412 [2024-07-15 15:30:32.004075] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804626 ] 00:25:22.412 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.691 [2024-07-15 15:30:32.037515] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:22.691 [2024-07-15 15:30:32.037558] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:22.691 [2024-07-15 15:30:32.037564] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:22.691 [2024-07-15 15:30:32.037574] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:22.691 [2024-07-15 15:30:32.037580] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:22.691 [2024-07-15 15:30:32.040919] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:22.691 [2024-07-15 15:30:32.040950] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfd1ec0 0 00:25:22.691 [2024-07-15 15:30:32.041202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:22.691 [2024-07-15 15:30:32.041211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:22.691 [2024-07-15 15:30:32.041215] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:22.691 [2024-07-15 15:30:32.041218] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:22.691 [2024-07-15 15:30:32.041249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.041254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.041258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.041270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:22.691 [2024-07-15 15:30:32.041283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.048892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.048902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.048905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.048910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.048919] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:22.691 [2024-07-15 15:30:32.048926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:22.691 [2024-07-15 15:30:32.048931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:22.691 [2024-07-15 15:30:32.048946] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.048951] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.048954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.048961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.048974] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.049194] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.049201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.049204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049208] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.049213] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:22.691 [2024-07-15 15:30:32.049220] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:22.691 [2024-07-15 15:30:32.049227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049234] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.049240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.049250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.049442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.049448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.049452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.049461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:22.691 [2024-07-15 15:30:32.049468] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.049474] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.049488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.049498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.049663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 
[2024-07-15 15:30:32.049669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.049672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.049681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.049690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.049706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.049716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.049898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.049905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.049908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.049912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.049917] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:22.691 [2024-07-15 15:30:32.049922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.049929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.050034] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:22.691 [2024-07-15 15:30:32.050038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.050046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.050050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.050053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.050060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.050070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.050288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.050295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.691 [2024-07-15 15:30:32.050298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.050302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.691 [2024-07-15 15:30:32.050306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:22.691 [2024-07-15 15:30:32.050315] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.050319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.691 [2024-07-15 15:30:32.050322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.691 [2024-07-15 15:30:32.050329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.691 [2024-07-15 15:30:32.050338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.691 [2024-07-15 15:30:32.050541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.691 [2024-07-15 15:30:32.050547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.692 [2024-07-15 15:30:32.050551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.050555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.692 [2024-07-15 15:30:32.050559] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:22.692 [2024-07-15 15:30:32.050565] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.050573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:22.692 [2024-07-15 15:30:32.050580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.050588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.050592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.050598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.692 [2024-07-15 15:30:32.050608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.692 [2024-07-15 15:30:32.050829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.692 [2024-07-15 15:30:32.050835] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.692 [2024-07-15 15:30:32.050838] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.050842] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1ec0): datao=0, datal=4096, cccid=0 00:25:22.692 [2024-07-15 15:30:32.050847] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1054e40) on tqpair(0xfd1ec0): expected_datao=0, payload_size=4096 00:25:22.692 [2024-07-15 15:30:32.050851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:25:22.692 [2024-07-15 15:30:32.050892] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.050896] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.692 [2024-07-15 15:30:32.051072] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.692 [2024-07-15 15:30:32.051076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051080] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.692 [2024-07-15 15:30:32.051086] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:22.692 [2024-07-15 15:30:32.051093] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:22.692 [2024-07-15 15:30:32.051098] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:22.692 [2024-07-15 15:30:32.051103] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:22.692 [2024-07-15 15:30:32.051107] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:22.692 [2024-07-15 15:30:32.051112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.051119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.051126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:22.692 [2024-07-15 15:30:32.051150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.692 [2024-07-15 15:30:32.051367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.692 [2024-07-15 15:30:32.051375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.692 [2024-07-15 15:30:32.051379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.692 [2024-07-15 15:30:32.051390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051393] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.692 [2024-07-15 15:30:32.051409] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.692 [2024-07-15 15:30:32.051427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.692 [2024-07-15 15:30:32.051446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.692 [2024-07-15 15:30:32.051463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.051473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:22.692 [2024-07-15 15:30:32.051480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051483] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051490] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.692 [2024-07-15 15:30:32.051501] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054e40, cid 0, qid 0 00:25:22.692 [2024-07-15 15:30:32.051506] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1054fc0, cid 1, qid 0 00:25:22.692 [2024-07-15 15:30:32.051511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1055140, cid 2, qid 0 00:25:22.692 [2024-07-15 15:30:32.051515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.692 [2024-07-15 15:30:32.051520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1055440, cid 4, qid 0 00:25:22.692 [2024-07-15 15:30:32.051788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.692 [2024-07-15 15:30:32.051794] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.692 [2024-07-15 15:30:32.051798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051801] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055440) on tqpair=0xfd1ec0 00:25:22.692 [2024-07-15 15:30:32.051806] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:22.692 [2024-07-15 15:30:32.051813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:22.692 [2024-07-15 15:30:32.051822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.051826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.051833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.692 [2024-07-15 15:30:32.051842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1055440, cid 4, qid 0 00:25:22.692 [2024-07-15 15:30:32.052051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.692 [2024-07-15 15:30:32.052059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.692 [2024-07-15 15:30:32.052062] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052066] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1ec0): datao=0, datal=4096, cccid=4 00:25:22.692 [2024-07-15 15:30:32.052070] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1055440) on tqpair(0xfd1ec0): expected_datao=0, payload_size=4096 00:25:22.692 [2024-07-15 15:30:32.052074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052084] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.692 [2024-07-15 15:30:32.052247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.692 [2024-07-15 15:30:32.052250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055440) on tqpair=0xfd1ec0 00:25:22.692 [2024-07-15 15:30:32.052265] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:22.692 [2024-07-15 15:30:32.052284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.052294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.692 [2024-07-15 15:30:32.052301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfd1ec0) 00:25:22.692 [2024-07-15 15:30:32.052314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.692 [2024-07-15 15:30:32.052326] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1055440, cid 4, qid 0 00:25:22.692 [2024-07-15 15:30:32.052332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10555c0, cid 5, qid 0 00:25:22.692 [2024-07-15 15:30:32.052600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.692 [2024-07-15 15:30:32.052607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.692 [2024-07-15 15:30:32.052610] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.692 [2024-07-15 15:30:32.052614] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1ec0): datao=0, datal=1024, cccid=4 00:25:22.692 [2024-07-15 15:30:32.052618] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1055440) on tqpair(0xfd1ec0): expected_datao=0, payload_size=1024 00:25:22.692 [2024-07-15 15:30:32.052622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.052629] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.052632] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.052640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.693 [2024-07-15 15:30:32.052646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.693 [2024-07-15 15:30:32.052649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.052653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10555c0) on tqpair=0xfd1ec0 00:25:22.693 [2024-07-15 15:30:32.096891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.693 [2024-07-15 15:30:32.096900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.693 [2024-07-15 15:30:32.096904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.096907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055440) on tqpair=0xfd1ec0 00:25:22.693 [2024-07-15 15:30:32.096924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.096928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1ec0) 00:25:22.693 [2024-07-15 15:30:32.096935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.693 [2024-07-15 15:30:32.096949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1055440, cid 4, qid 0 00:25:22.693 [2024-07-15 15:30:32.097153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.693 [2024-07-15 15:30:32.097159] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.693 [2024-07-15 15:30:32.097163] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097166] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1ec0): datao=0, datal=3072, cccid=4 00:25:22.693 [2024-07-15 15:30:32.097170] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1055440) on tqpair(0xfd1ec0): expected_datao=0, payload_size=3072 00:25:22.693 [2024-07-15 15:30:32.097175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097194] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097198] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.693 [2024-07-15 15:30:32.097362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.693 [2024-07-15 15:30:32.097365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055440) on tqpair=0xfd1ec0 00:25:22.693 [2024-07-15 15:30:32.097377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfd1ec0) 00:25:22.693 [2024-07-15 15:30:32.097387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.693 [2024-07-15 15:30:32.097399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1055440, cid 4, qid 0 00:25:22.693 [2024-07-15 15:30:32.097590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.693 [2024-07-15 15:30:32.097596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.693 [2024-07-15 15:30:32.097600] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097603] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfd1ec0): datao=0, datal=8, cccid=4 00:25:22.693 [2024-07-15 15:30:32.097608] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1055440) on tqpair(0xfd1ec0): expected_datao=0, payload_size=8 00:25:22.693 [2024-07-15 15:30:32.097612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097618] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.097621] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.143892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.693 [2024-07-15 15:30:32.143908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.693 [2024-07-15 15:30:32.143911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.693 [2024-07-15 15:30:32.143916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055440) on tqpair=0xfd1ec0 00:25:22.693 ===================================================== 00:25:22.693 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:22.693 ===================================================== 00:25:22.693 Controller Capabilities/Features 00:25:22.693 ================================ 00:25:22.693 Vendor ID: 0000 00:25:22.693 Subsystem Vendor ID: 0000 00:25:22.693 Serial Number: .................... 00:25:22.693 Model Number: ........................................ 
00:25:22.693 Firmware Version: 24.09 00:25:22.693 Recommended Arb Burst: 0 00:25:22.693 IEEE OUI Identifier: 00 00 00 00:25:22.693 Multi-path I/O 00:25:22.693 May have multiple subsystem ports: No 00:25:22.693 May have multiple controllers: No 00:25:22.693 Associated with SR-IOV VF: No 00:25:22.693 Max Data Transfer Size: 131072 00:25:22.693 Max Number of Namespaces: 0 00:25:22.693 Max Number of I/O Queues: 1024 00:25:22.693 NVMe Specification Version (VS): 1.3 00:25:22.693 NVMe Specification Version (Identify): 1.3 00:25:22.693 Maximum Queue Entries: 128 00:25:22.693 Contiguous Queues Required: Yes 00:25:22.693 Arbitration Mechanisms Supported 00:25:22.693 Weighted Round Robin: Not Supported 00:25:22.693 Vendor Specific: Not Supported 00:25:22.693 Reset Timeout: 15000 ms 00:25:22.693 Doorbell Stride: 4 bytes 00:25:22.693 NVM Subsystem Reset: Not Supported 00:25:22.693 Command Sets Supported 00:25:22.693 NVM Command Set: Supported 00:25:22.693 Boot Partition: Not Supported 00:25:22.693 Memory Page Size Minimum: 4096 bytes 00:25:22.693 Memory Page Size Maximum: 4096 bytes 00:25:22.693 Persistent Memory Region: Not Supported 00:25:22.693 Optional Asynchronous Events Supported 00:25:22.693 Namespace Attribute Notices: Not Supported 00:25:22.693 Firmware Activation Notices: Not Supported 00:25:22.693 ANA Change Notices: Not Supported 00:25:22.693 PLE Aggregate Log Change Notices: Not Supported 00:25:22.693 LBA Status Info Alert Notices: Not Supported 00:25:22.693 EGE Aggregate Log Change Notices: Not Supported 00:25:22.693 Normal NVM Subsystem Shutdown event: Not Supported 00:25:22.693 Zone Descriptor Change Notices: Not Supported 00:25:22.693 Discovery Log Change Notices: Supported 00:25:22.693 Controller Attributes 00:25:22.693 128-bit Host Identifier: Not Supported 00:25:22.693 Non-Operational Permissive Mode: Not Supported 00:25:22.693 NVM Sets: Not Supported 00:25:22.693 Read Recovery Levels: Not Supported 00:25:22.693 Endurance Groups: Not Supported 00:25:22.693 Predictable Latency Mode: Not Supported 00:25:22.693 Traffic Based Keep ALive: Not Supported 00:25:22.693 Namespace Granularity: Not Supported 00:25:22.693 SQ Associations: Not Supported 00:25:22.693 UUID List: Not Supported 00:25:22.693 Multi-Domain Subsystem: Not Supported 00:25:22.693 Fixed Capacity Management: Not Supported 00:25:22.693 Variable Capacity Management: Not Supported 00:25:22.693 Delete Endurance Group: Not Supported 00:25:22.693 Delete NVM Set: Not Supported 00:25:22.693 Extended LBA Formats Supported: Not Supported 00:25:22.693 Flexible Data Placement Supported: Not Supported 00:25:22.693 00:25:22.693 Controller Memory Buffer Support 00:25:22.693 ================================ 00:25:22.693 Supported: No 00:25:22.693 00:25:22.693 Persistent Memory Region Support 00:25:22.693 ================================ 00:25:22.693 Supported: No 00:25:22.693 00:25:22.693 Admin Command Set Attributes 00:25:22.693 ============================ 00:25:22.693 Security Send/Receive: Not Supported 00:25:22.693 Format NVM: Not Supported 00:25:22.693 Firmware Activate/Download: Not Supported 00:25:22.693 Namespace Management: Not Supported 00:25:22.693 Device Self-Test: Not Supported 00:25:22.693 Directives: Not Supported 00:25:22.693 NVMe-MI: Not Supported 00:25:22.693 Virtualization Management: Not Supported 00:25:22.693 Doorbell Buffer Config: Not Supported 00:25:22.693 Get LBA Status Capability: Not Supported 00:25:22.693 Command & Feature Lockdown Capability: Not Supported 00:25:22.693 Abort Command Limit: 1 00:25:22.693 Async 
Event Request Limit: 4 00:25:22.693 Number of Firmware Slots: N/A 00:25:22.693 Firmware Slot 1 Read-Only: N/A 00:25:22.693 Firmware Activation Without Reset: N/A 00:25:22.693 Multiple Update Detection Support: N/A 00:25:22.693 Firmware Update Granularity: No Information Provided 00:25:22.693 Per-Namespace SMART Log: No 00:25:22.693 Asymmetric Namespace Access Log Page: Not Supported 00:25:22.693 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:22.693 Command Effects Log Page: Not Supported 00:25:22.693 Get Log Page Extended Data: Supported 00:25:22.693 Telemetry Log Pages: Not Supported 00:25:22.693 Persistent Event Log Pages: Not Supported 00:25:22.693 Supported Log Pages Log Page: May Support 00:25:22.693 Commands Supported & Effects Log Page: Not Supported 00:25:22.693 Feature Identifiers & Effects Log Page:May Support 00:25:22.693 NVMe-MI Commands & Effects Log Page: May Support 00:25:22.693 Data Area 4 for Telemetry Log: Not Supported 00:25:22.693 Error Log Page Entries Supported: 128 00:25:22.693 Keep Alive: Not Supported 00:25:22.693 00:25:22.693 NVM Command Set Attributes 00:25:22.693 ========================== 00:25:22.693 Submission Queue Entry Size 00:25:22.693 Max: 1 00:25:22.693 Min: 1 00:25:22.693 Completion Queue Entry Size 00:25:22.693 Max: 1 00:25:22.693 Min: 1 00:25:22.693 Number of Namespaces: 0 00:25:22.693 Compare Command: Not Supported 00:25:22.693 Write Uncorrectable Command: Not Supported 00:25:22.693 Dataset Management Command: Not Supported 00:25:22.693 Write Zeroes Command: Not Supported 00:25:22.693 Set Features Save Field: Not Supported 00:25:22.693 Reservations: Not Supported 00:25:22.694 Timestamp: Not Supported 00:25:22.694 Copy: Not Supported 00:25:22.694 Volatile Write Cache: Not Present 00:25:22.694 Atomic Write Unit (Normal): 1 00:25:22.694 Atomic Write Unit (PFail): 1 00:25:22.694 Atomic Compare & Write Unit: 1 00:25:22.694 Fused Compare & Write: Supported 00:25:22.694 Scatter-Gather List 00:25:22.694 SGL Command Set: Supported 00:25:22.694 SGL Keyed: Supported 00:25:22.694 SGL Bit Bucket Descriptor: Not Supported 00:25:22.694 SGL Metadata Pointer: Not Supported 00:25:22.694 Oversized SGL: Not Supported 00:25:22.694 SGL Metadata Address: Not Supported 00:25:22.694 SGL Offset: Supported 00:25:22.694 Transport SGL Data Block: Not Supported 00:25:22.694 Replay Protected Memory Block: Not Supported 00:25:22.694 00:25:22.694 Firmware Slot Information 00:25:22.694 ========================= 00:25:22.694 Active slot: 0 00:25:22.694 00:25:22.694 00:25:22.694 Error Log 00:25:22.694 ========= 00:25:22.694 00:25:22.694 Active Namespaces 00:25:22.694 ================= 00:25:22.694 Discovery Log Page 00:25:22.694 ================== 00:25:22.694 Generation Counter: 2 00:25:22.694 Number of Records: 2 00:25:22.694 Record Format: 0 00:25:22.694 00:25:22.694 Discovery Log Entry 0 00:25:22.694 ---------------------- 00:25:22.694 Transport Type: 3 (TCP) 00:25:22.694 Address Family: 1 (IPv4) 00:25:22.694 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:22.694 Entry Flags: 00:25:22.694 Duplicate Returned Information: 1 00:25:22.694 Explicit Persistent Connection Support for Discovery: 1 00:25:22.694 Transport Requirements: 00:25:22.694 Secure Channel: Not Required 00:25:22.694 Port ID: 0 (0x0000) 00:25:22.694 Controller ID: 65535 (0xffff) 00:25:22.694 Admin Max SQ Size: 128 00:25:22.694 Transport Service Identifier: 4420 00:25:22.694 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:22.694 Transport Address: 10.0.0.2 00:25:22.694 
Discovery Log Entry 1 00:25:22.694 ---------------------- 00:25:22.694 Transport Type: 3 (TCP) 00:25:22.694 Address Family: 1 (IPv4) 00:25:22.694 Subsystem Type: 2 (NVM Subsystem) 00:25:22.694 Entry Flags: 00:25:22.694 Duplicate Returned Information: 0 00:25:22.694 Explicit Persistent Connection Support for Discovery: 0 00:25:22.694 Transport Requirements: 00:25:22.694 Secure Channel: Not Required 00:25:22.694 Port ID: 0 (0x0000) 00:25:22.694 Controller ID: 65535 (0xffff) 00:25:22.694 Admin Max SQ Size: 128 00:25:22.694 Transport Service Identifier: 4420 00:25:22.694 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:22.694 Transport Address: 10.0.0.2 [2024-07-15 15:30:32.144000] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:22.694 [2024-07-15 15:30:32.144010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054e40) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.694 [2024-07-15 15:30:32.144022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1054fc0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.694 [2024-07-15 15:30:32.144031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1055140) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.694 [2024-07-15 15:30:32.144041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.694 [2024-07-15 15:30:32.144055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.144070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.144084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.144305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.144311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.144315] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.144339] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.144351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.144556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.144562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.144566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144576] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:22.694 [2024-07-15 15:30:32.144580] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:22.694 [2024-07-15 15:30:32.144589] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.144605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.144615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.144770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.144776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.144780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.144793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.144800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.144807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.144817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.145008] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.145015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.145018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.145031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145035] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145039] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.145045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.145055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.145260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.145266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.145269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.145282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145290] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.145296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.145306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.145513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.145519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.145523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145526] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.694 [2024-07-15 15:30:32.145536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.694 [2024-07-15 15:30:32.145543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.694 [2024-07-15 15:30:32.145551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.694 [2024-07-15 15:30:32.145561] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.694 [2024-07-15 15:30:32.145737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.694 [2024-07-15 15:30:32.145743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.694 [2024-07-15 15:30:32.145746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.145759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.145773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.145783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.145966] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.145973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.145976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.145989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.145997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.146003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.146013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.146219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.146226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.146229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.146243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.146256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.146266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.146471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.146477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.146480] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146484] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.146494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.146507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.146519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 
15:30:32.146721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.146728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.146731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.146745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.146759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.146768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.146973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.146979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.146983] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.146996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.146999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.147009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.147019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.147227] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.147233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.147236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.147249] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.147263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.147273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.147477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.147483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 
15:30:32.147487] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.147500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.147513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.147523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.147688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.147695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.147698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.147711] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147715] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.147718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.695 [2024-07-15 15:30:32.147725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.695 [2024-07-15 15:30:32.147734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.695 [2024-07-15 15:30:32.151892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.695 [2024-07-15 15:30:32.151900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.695 [2024-07-15 15:30:32.151904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.695 [2024-07-15 15:30:32.151907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on tqpair=0xfd1ec0 00:25:22.695 [2024-07-15 15:30:32.151917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.151921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.151925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfd1ec0) 00:25:22.696 [2024-07-15 15:30:32.151931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.151942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10552c0, cid 3, qid 0 00:25:22.696 [2024-07-15 15:30:32.152179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.152185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.152188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.152192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10552c0) on 
tqpair=0xfd1ec0 00:25:22.696 [2024-07-15 15:30:32.152199] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:22.696 00:25:22.696 15:30:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:22.696 [2024-07-15 15:30:32.191025] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:22.696 [2024-07-15 15:30:32.191077] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804713 ] 00:25:22.696 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.696 [2024-07-15 15:30:32.222438] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:22.696 [2024-07-15 15:30:32.222481] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:22.696 [2024-07-15 15:30:32.222486] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:22.696 [2024-07-15 15:30:32.222497] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:22.696 [2024-07-15 15:30:32.222503] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:22.696 [2024-07-15 15:30:32.225915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:22.696 [2024-07-15 15:30:32.225939] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2236ec0 0 00:25:22.696 [2024-07-15 15:30:32.233895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:22.696 [2024-07-15 15:30:32.233904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:22.696 [2024-07-15 15:30:32.233908] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:22.696 [2024-07-15 15:30:32.233911] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:22.696 [2024-07-15 15:30:32.233939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.233944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.233947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.233957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:22.696 [2024-07-15 15:30:32.233972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.241893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.241901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.241904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.241909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.241919] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:22.696 [2024-07-15 15:30:32.241925] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:22.696 [2024-07-15 15:30:32.241930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:22.696 [2024-07-15 15:30:32.241941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.241945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.241949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.241956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.241968] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.242134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.242140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.242143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.242152] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:22.696 [2024-07-15 15:30:32.242159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:22.696 [2024-07-15 15:30:32.242165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242172] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.242179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.242189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.242345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.242354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.242357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.242366] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:22.696 [2024-07-15 15:30:32.242374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.242380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242387] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.242393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.242404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.242560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.242566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.242570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.242578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.242587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.242601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.242611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.242758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.242764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.242767] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.242775] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:22.696 [2024-07-15 15:30:32.242780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.242787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.242892] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:22.696 [2024-07-15 15:30:32.242896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.242903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.242910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.242916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.242929] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.243108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.243114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.243118] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.243121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.243126] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:22.696 [2024-07-15 15:30:32.243135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.243138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.243142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.696 [2024-07-15 15:30:32.243148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.696 [2024-07-15 15:30:32.243158] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.696 [2024-07-15 15:30:32.243366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.696 [2024-07-15 15:30:32.243372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.696 [2024-07-15 15:30:32.243375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.696 [2024-07-15 15:30:32.243379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.696 [2024-07-15 15:30:32.243383] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:22.696 [2024-07-15 15:30:32.243388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:22.696 [2024-07-15 15:30:32.243395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:22.696 [2024-07-15 15:30:32.243402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.243410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.243413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.243420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.697 [2024-07-15 15:30:32.243430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.697 [2024-07-15 15:30:32.243645] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.697 [2024-07-15 15:30:32.243651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.697 [2024-07-15 15:30:32.243655] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 
15:30:32.243658] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=4096, cccid=0 00:25:22.697 [2024-07-15 15:30:32.243663] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22b9e40) on tqpair(0x2236ec0): expected_datao=0, payload_size=4096 00:25:22.697 [2024-07-15 15:30:32.243667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.243684] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.243688] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.697 [2024-07-15 15:30:32.284038] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.697 [2024-07-15 15:30:32.284042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.697 [2024-07-15 15:30:32.284055] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:22.697 [2024-07-15 15:30:32.284063] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:22.697 [2024-07-15 15:30:32.284067] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:22.697 [2024-07-15 15:30:32.284071] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:22.697 [2024-07-15 15:30:32.284076] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:22.697 [2024-07-15 15:30:32.284080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284095] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:22.697 [2024-07-15 15:30:32.284120] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.697 [2024-07-15 15:30:32.284259] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.697 [2024-07-15 15:30:32.284266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.697 [2024-07-15 15:30:32.284269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.697 [2024-07-15 15:30:32.284279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284283] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 
15:30:32.284287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.697 [2024-07-15 15:30:32.284298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.697 [2024-07-15 15:30:32.284317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.697 [2024-07-15 15:30:32.284335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284339] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284342] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.697 [2024-07-15 15:30:32.284352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.697 [2024-07-15 15:30:32.284391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9e40, cid 0, qid 0 00:25:22.697 [2024-07-15 15:30:32.284397] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22b9fc0, cid 1, qid 0 00:25:22.697 [2024-07-15 15:30:32.284401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba140, cid 2, qid 0 00:25:22.697 [2024-07-15 15:30:32.284406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.697 [2024-07-15 15:30:32.284411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.697 [2024-07-15 15:30:32.284589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.697 [2024-07-15 15:30:32.284596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
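The trace up to this point is the host-side initialization of the NVMe-oF controller: ICReq/ICResp exchange, FABRIC CONNECT, reads of the VS and CAP properties, CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY CONTROLLER, asynchronous event (AER) configuration and keep-alive setup. As a rough sketch only (not part of this test run; the helper name is hypothetical and it assumes the SPDK environment has already been set up with spdk_env_init()), the same sequence is driven by a single call to the public host API:

#include <string.h>

#include "spdk/nvme.h"

/* Hypothetical helper: connect to the same target that spdk_nvme_identify
 * is pointed at above via its -r transport ID string. */
static struct spdk_nvme_ctrlr *connect_target(void)
{
	struct spdk_nvme_transport_id trid;

	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return NULL;
	}

	/* spdk_nvme_connect() drives the sequence the *DEBUG* lines above
	 * record one PDU at a time: ICReq/ICResp, FABRIC CONNECT, VS/CAP
	 * property reads, CC.EN = 1, wait for CSTS.RDY, IDENTIFY and AER
	 * configuration. NULL/0 selects the default controller options. */
	return spdk_nvme_connect(&trid, NULL, 0);
}
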
00:25:22.697 [2024-07-15 15:30:32.284599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.697 [2024-07-15 15:30:32.284607] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:22.697 [2024-07-15 15:30:32.284612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:22.697 [2024-07-15 15:30:32.284655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.697 [2024-07-15 15:30:32.284805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.697 [2024-07-15 15:30:32.284812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.697 [2024-07-15 15:30:32.284815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.697 [2024-07-15 15:30:32.284888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.284904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.284908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.284914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.697 [2024-07-15 15:30:32.284926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.697 [2024-07-15 15:30:32.285111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.697 [2024-07-15 15:30:32.285118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.697 [2024-07-15 15:30:32.285121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285125] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=4096, cccid=4 00:25:22.697 [2024-07-15 
15:30:32.285129] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba440) on tqpair(0x2236ec0): expected_datao=0, payload_size=4096 00:25:22.697 [2024-07-15 15:30:32.285133] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285140] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285143] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.697 [2024-07-15 15:30:32.285315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.697 [2024-07-15 15:30:32.285319] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285322] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.697 [2024-07-15 15:30:32.285330] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:22.697 [2024-07-15 15:30:32.285342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.285351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:22.697 [2024-07-15 15:30:32.285358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.697 [2024-07-15 15:30:32.285368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.697 [2024-07-15 15:30:32.285378] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.697 [2024-07-15 15:30:32.285559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.697 [2024-07-15 15:30:32.285565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.697 [2024-07-15 15:30:32.285569] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.697 [2024-07-15 15:30:32.285572] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=4096, cccid=4 00:25:22.698 [2024-07-15 15:30:32.285577] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba440) on tqpair(0x2236ec0): expected_datao=0, payload_size=4096 00:25:22.698 [2024-07-15 15:30:32.285581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.285587] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.285591] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.285753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.285760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.285763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.285767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.285778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to identify namespace id descriptors (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.285787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.285794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.285801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.285807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.285818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.698 [2024-07-15 15:30:32.289893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.698 [2024-07-15 15:30:32.289901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.698 [2024-07-15 15:30:32.289905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.289908] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=4096, cccid=4 00:25:22.698 [2024-07-15 15:30:32.289912] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba440) on tqpair(0x2236ec0): expected_datao=0, payload_size=4096 00:25:22.698 [2024-07-15 15:30:32.289917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.289923] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.289926] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.289932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.289938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.289941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.289945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.289952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289988] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:22.698 
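Once initialization finishes and the controller reaches the ready state (logged just below), the data behind the identify report that follows is exposed through the public accessors. A minimal sketch, assuming a connected ctrlr handle such as one returned by spdk_nvme_connect(); the helper and the field selection are illustrative, not the identify tool's actual code:

#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical helper: print a few of the fields that appear in the
 * "Controller Capabilities/Features" and "Active Namespaces" sections
 * of the report below. */
static void print_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	/* IDENTIFY CONTROLLER data cached during initialization. */
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("SN: %.20s MN: %.40s FR: %.8s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn,
	       (const char *)cdata->fr);

	/* Walk the active namespaces (Namespace ID:1 in the report below). */
	for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("nsid %u: %llu LBAs\n", nsid,
		       (unsigned long long)spdk_nvme_ns_get_num_sectors(ns));
	}
}
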
[2024-07-15 15:30:32.289992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:22.698 [2024-07-15 15:30:32.289997] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:22.698 [2024-07-15 15:30:32.290010] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.290026] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:22.698 [2024-07-15 15:30:32.290053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.698 [2024-07-15 15:30:32.290069] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba5c0, cid 5, qid 0 00:25:22.698 [2024-07-15 15:30:32.290272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.290278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.290282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.290292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.290298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.290301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba5c0) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.290314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.290333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba5c0, cid 5, qid 0 00:25:22.698 [2024-07-15 15:30:32.290531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.290537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.290541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba5c0) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.290554] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.290573] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba5c0, cid 5, qid 0 00:25:22.698 [2024-07-15 15:30:32.290716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.290723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.290726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba5c0) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.290739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290742] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.290758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba5c0, cid 5, qid 0 00:25:22.698 [2024-07-15 15:30:32.290941] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.698 [2024-07-15 15:30:32.290948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.698 [2024-07-15 15:30:32.290951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba5c0) on tqpair=0x2236ec0 00:25:22.698 [2024-07-15 15:30:32.290968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.290988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.290991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.290998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.291005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.291014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.291021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291025] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2236ec0) 00:25:22.698 [2024-07-15 15:30:32.291031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.698 [2024-07-15 15:30:32.291042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba5c0, cid 5, qid 0 00:25:22.698 [2024-07-15 15:30:32.291047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba440, cid 4, qid 0 00:25:22.698 [2024-07-15 15:30:32.291052] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba740, cid 6, qid 0 00:25:22.698 [2024-07-15 15:30:32.291056] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba8c0, cid 7, qid 0 00:25:22.698 [2024-07-15 15:30:32.291276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.698 [2024-07-15 15:30:32.291283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.698 [2024-07-15 15:30:32.291286] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291290] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=8192, cccid=5 00:25:22.698 [2024-07-15 15:30:32.291294] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba5c0) on tqpair(0x2236ec0): expected_datao=0, payload_size=8192 00:25:22.698 [2024-07-15 15:30:32.291298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291390] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291394] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.698 [2024-07-15 15:30:32.291406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.698 [2024-07-15 15:30:32.291409] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291412] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=512, cccid=4 00:25:22.698 [2024-07-15 15:30:32.291417] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba440) on tqpair(0x2236ec0): expected_datao=0, payload_size=512 00:25:22.698 [2024-07-15 15:30:32.291421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.698 [2024-07-15 15:30:32.291430] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.699 [2024-07-15 15:30:32.291442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.699 [2024-07-15 15:30:32.291445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=512, cccid=6 00:25:22.699 [2024-07-15 15:30:32.291453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba740) on tqpair(0x2236ec0): expected_datao=0, payload_size=512 00:25:22.699 [2024-07-15 15:30:32.291458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.699 [2024-07-15 
15:30:32.291465] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291468] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:22.699 [2024-07-15 15:30:32.291479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:22.699 [2024-07-15 15:30:32.291483] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291486] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236ec0): datao=0, datal=4096, cccid=7 00:25:22.699 [2024-07-15 15:30:32.291490] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22ba8c0) on tqpair(0x2236ec0): expected_datao=0, payload_size=4096 00:25:22.699 [2024-07-15 15:30:32.291494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291505] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291509] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.699 [2024-07-15 15:30:32.291675] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.699 [2024-07-15 15:30:32.291678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba5c0) on tqpair=0x2236ec0 00:25:22.699 [2024-07-15 15:30:32.291694] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.699 [2024-07-15 15:30:32.291700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.699 [2024-07-15 15:30:32.291703] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291707] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba440) on tqpair=0x2236ec0 00:25:22.699 [2024-07-15 15:30:32.291717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.699 [2024-07-15 15:30:32.291723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.699 [2024-07-15 15:30:32.291726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba740) on tqpair=0x2236ec0 00:25:22.699 [2024-07-15 15:30:32.291737] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.699 [2024-07-15 15:30:32.291743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.699 [2024-07-15 15:30:32.291746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.699 [2024-07-15 15:30:32.291750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba8c0) on tqpair=0x2236ec0 00:25:22.699 ===================================================== 00:25:22.699 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:22.699 ===================================================== 00:25:22.699 Controller Capabilities/Features 00:25:22.699 ================================ 00:25:22.699 Vendor ID: 8086 00:25:22.699 Subsystem Vendor ID: 8086 00:25:22.699 Serial Number: SPDK00000000000001 00:25:22.699 Model Number: SPDK bdev Controller 00:25:22.699 Firmware Version: 24.09 00:25:22.699 Recommended Arb Burst: 6 00:25:22.699 
IEEE OUI Identifier: e4 d2 5c 00:25:22.699 Multi-path I/O 00:25:22.699 May have multiple subsystem ports: Yes 00:25:22.699 May have multiple controllers: Yes 00:25:22.699 Associated with SR-IOV VF: No 00:25:22.699 Max Data Transfer Size: 131072 00:25:22.699 Max Number of Namespaces: 32 00:25:22.699 Max Number of I/O Queues: 127 00:25:22.699 NVMe Specification Version (VS): 1.3 00:25:22.699 NVMe Specification Version (Identify): 1.3 00:25:22.699 Maximum Queue Entries: 128 00:25:22.699 Contiguous Queues Required: Yes 00:25:22.699 Arbitration Mechanisms Supported 00:25:22.699 Weighted Round Robin: Not Supported 00:25:22.699 Vendor Specific: Not Supported 00:25:22.699 Reset Timeout: 15000 ms 00:25:22.699 Doorbell Stride: 4 bytes 00:25:22.699 NVM Subsystem Reset: Not Supported 00:25:22.699 Command Sets Supported 00:25:22.699 NVM Command Set: Supported 00:25:22.699 Boot Partition: Not Supported 00:25:22.699 Memory Page Size Minimum: 4096 bytes 00:25:22.699 Memory Page Size Maximum: 4096 bytes 00:25:22.699 Persistent Memory Region: Not Supported 00:25:22.699 Optional Asynchronous Events Supported 00:25:22.699 Namespace Attribute Notices: Supported 00:25:22.699 Firmware Activation Notices: Not Supported 00:25:22.699 ANA Change Notices: Not Supported 00:25:22.699 PLE Aggregate Log Change Notices: Not Supported 00:25:22.699 LBA Status Info Alert Notices: Not Supported 00:25:22.699 EGE Aggregate Log Change Notices: Not Supported 00:25:22.699 Normal NVM Subsystem Shutdown event: Not Supported 00:25:22.699 Zone Descriptor Change Notices: Not Supported 00:25:22.699 Discovery Log Change Notices: Not Supported 00:25:22.699 Controller Attributes 00:25:22.699 128-bit Host Identifier: Supported 00:25:22.699 Non-Operational Permissive Mode: Not Supported 00:25:22.699 NVM Sets: Not Supported 00:25:22.699 Read Recovery Levels: Not Supported 00:25:22.699 Endurance Groups: Not Supported 00:25:22.699 Predictable Latency Mode: Not Supported 00:25:22.699 Traffic Based Keep ALive: Not Supported 00:25:22.699 Namespace Granularity: Not Supported 00:25:22.699 SQ Associations: Not Supported 00:25:22.699 UUID List: Not Supported 00:25:22.699 Multi-Domain Subsystem: Not Supported 00:25:22.699 Fixed Capacity Management: Not Supported 00:25:22.699 Variable Capacity Management: Not Supported 00:25:22.699 Delete Endurance Group: Not Supported 00:25:22.699 Delete NVM Set: Not Supported 00:25:22.699 Extended LBA Formats Supported: Not Supported 00:25:22.699 Flexible Data Placement Supported: Not Supported 00:25:22.699 00:25:22.699 Controller Memory Buffer Support 00:25:22.699 ================================ 00:25:22.699 Supported: No 00:25:22.699 00:25:22.699 Persistent Memory Region Support 00:25:22.699 ================================ 00:25:22.699 Supported: No 00:25:22.699 00:25:22.699 Admin Command Set Attributes 00:25:22.699 ============================ 00:25:22.699 Security Send/Receive: Not Supported 00:25:22.699 Format NVM: Not Supported 00:25:22.699 Firmware Activate/Download: Not Supported 00:25:22.699 Namespace Management: Not Supported 00:25:22.699 Device Self-Test: Not Supported 00:25:22.699 Directives: Not Supported 00:25:22.699 NVMe-MI: Not Supported 00:25:22.699 Virtualization Management: Not Supported 00:25:22.699 Doorbell Buffer Config: Not Supported 00:25:22.699 Get LBA Status Capability: Not Supported 00:25:22.699 Command & Feature Lockdown Capability: Not Supported 00:25:22.699 Abort Command Limit: 4 00:25:22.699 Async Event Request Limit: 4 00:25:22.699 Number of Firmware Slots: N/A 00:25:22.699 Firmware 
Slot 1 Read-Only: N/A 00:25:22.699 Firmware Activation Without Reset: N/A 00:25:22.699 Multiple Update Detection Support: N/A 00:25:22.699 Firmware Update Granularity: No Information Provided 00:25:22.699 Per-Namespace SMART Log: No 00:25:22.699 Asymmetric Namespace Access Log Page: Not Supported 00:25:22.699 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:22.699 Command Effects Log Page: Supported 00:25:22.699 Get Log Page Extended Data: Supported 00:25:22.699 Telemetry Log Pages: Not Supported 00:25:22.699 Persistent Event Log Pages: Not Supported 00:25:22.699 Supported Log Pages Log Page: May Support 00:25:22.699 Commands Supported & Effects Log Page: Not Supported 00:25:22.699 Feature Identifiers & Effects Log Page:May Support 00:25:22.699 NVMe-MI Commands & Effects Log Page: May Support 00:25:22.699 Data Area 4 for Telemetry Log: Not Supported 00:25:22.699 Error Log Page Entries Supported: 128 00:25:22.699 Keep Alive: Supported 00:25:22.699 Keep Alive Granularity: 10000 ms 00:25:22.699 00:25:22.699 NVM Command Set Attributes 00:25:22.699 ========================== 00:25:22.699 Submission Queue Entry Size 00:25:22.699 Max: 64 00:25:22.699 Min: 64 00:25:22.699 Completion Queue Entry Size 00:25:22.699 Max: 16 00:25:22.699 Min: 16 00:25:22.699 Number of Namespaces: 32 00:25:22.699 Compare Command: Supported 00:25:22.699 Write Uncorrectable Command: Not Supported 00:25:22.699 Dataset Management Command: Supported 00:25:22.699 Write Zeroes Command: Supported 00:25:22.699 Set Features Save Field: Not Supported 00:25:22.699 Reservations: Supported 00:25:22.699 Timestamp: Not Supported 00:25:22.699 Copy: Supported 00:25:22.699 Volatile Write Cache: Present 00:25:22.699 Atomic Write Unit (Normal): 1 00:25:22.699 Atomic Write Unit (PFail): 1 00:25:22.699 Atomic Compare & Write Unit: 1 00:25:22.699 Fused Compare & Write: Supported 00:25:22.699 Scatter-Gather List 00:25:22.699 SGL Command Set: Supported 00:25:22.699 SGL Keyed: Supported 00:25:22.699 SGL Bit Bucket Descriptor: Not Supported 00:25:22.699 SGL Metadata Pointer: Not Supported 00:25:22.699 Oversized SGL: Not Supported 00:25:22.699 SGL Metadata Address: Not Supported 00:25:22.699 SGL Offset: Supported 00:25:22.699 Transport SGL Data Block: Not Supported 00:25:22.699 Replay Protected Memory Block: Not Supported 00:25:22.699 00:25:22.699 Firmware Slot Information 00:25:22.699 ========================= 00:25:22.699 Active slot: 1 00:25:22.699 Slot 1 Firmware Revision: 24.09 00:25:22.699 00:25:22.699 00:25:22.699 Commands Supported and Effects 00:25:22.699 ============================== 00:25:22.699 Admin Commands 00:25:22.699 -------------- 00:25:22.699 Get Log Page (02h): Supported 00:25:22.699 Identify (06h): Supported 00:25:22.700 Abort (08h): Supported 00:25:22.700 Set Features (09h): Supported 00:25:22.700 Get Features (0Ah): Supported 00:25:22.700 Asynchronous Event Request (0Ch): Supported 00:25:22.700 Keep Alive (18h): Supported 00:25:22.700 I/O Commands 00:25:22.700 ------------ 00:25:22.700 Flush (00h): Supported LBA-Change 00:25:22.700 Write (01h): Supported LBA-Change 00:25:22.700 Read (02h): Supported 00:25:22.700 Compare (05h): Supported 00:25:22.700 Write Zeroes (08h): Supported LBA-Change 00:25:22.700 Dataset Management (09h): Supported LBA-Change 00:25:22.700 Copy (19h): Supported LBA-Change 00:25:22.700 00:25:22.700 Error Log 00:25:22.700 ========= 00:25:22.700 00:25:22.700 Arbitration 00:25:22.700 =========== 00:25:22.700 Arbitration Burst: 1 00:25:22.700 00:25:22.700 Power Management 00:25:22.700 ================ 
00:25:22.700 Number of Power States: 1 00:25:22.700 Current Power State: Power State #0 00:25:22.700 Power State #0: 00:25:22.700 Max Power: 0.00 W 00:25:22.700 Non-Operational State: Operational 00:25:22.700 Entry Latency: Not Reported 00:25:22.700 Exit Latency: Not Reported 00:25:22.700 Relative Read Throughput: 0 00:25:22.700 Relative Read Latency: 0 00:25:22.700 Relative Write Throughput: 0 00:25:22.700 Relative Write Latency: 0 00:25:22.700 Idle Power: Not Reported 00:25:22.700 Active Power: Not Reported 00:25:22.700 Non-Operational Permissive Mode: Not Supported 00:25:22.700 00:25:22.700 Health Information 00:25:22.700 ================== 00:25:22.700 Critical Warnings: 00:25:22.700 Available Spare Space: OK 00:25:22.700 Temperature: OK 00:25:22.700 Device Reliability: OK 00:25:22.700 Read Only: No 00:25:22.700 Volatile Memory Backup: OK 00:25:22.700 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:22.700 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:22.700 Available Spare: 0% 00:25:22.700 Available Spare Threshold: 0% 00:25:22.700 Life Percentage Used:[2024-07-15 15:30:32.291849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.291854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.291861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.291872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba8c0, cid 7, qid 0 00:25:22.700 [2024-07-15 15:30:32.292024] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.292031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.292035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba8c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292070] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:22.700 [2024-07-15 15:30:32.292079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9e40) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.700 [2024-07-15 15:30:32.292092] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22b9fc0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.700 [2024-07-15 15:30:32.292101] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba140) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.700 [2024-07-15 15:30:32.292110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.700 [2024-07-15 15:30:32.292123] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292130] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.292137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.292148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.700 [2024-07-15 15:30:32.292337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.292344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.292347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292364] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.292371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.292383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.700 [2024-07-15 15:30:32.292604] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.292611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.292614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292622] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:22.700 [2024-07-15 15:30:32.292627] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:22.700 [2024-07-15 15:30:32.292636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.292649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.292659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.700 [2024-07-15 15:30:32.292832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.292838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.292841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292847] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.292857] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.292864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.292871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.292880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.700 [2024-07-15 15:30:32.293053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.293059] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.293063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.293076] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.293090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.293099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.700 [2024-07-15 15:30:32.293315] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.700 [2024-07-15 15:30:32.293321] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.700 [2024-07-15 15:30:32.293324] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293328] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.700 [2024-07-15 15:30:32.293337] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293341] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.700 [2024-07-15 15:30:32.293345] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.700 [2024-07-15 15:30:32.293351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.700 [2024-07-15 15:30:32.293361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.701 [2024-07-15 15:30:32.293578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.701 [2024-07-15 15:30:32.293585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.701 [2024-07-15 15:30:32.293588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.701 [2024-07-15 15:30:32.293601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293605] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.701 [2024-07-15 15:30:32.293615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.701 [2024-07-15 15:30:32.293624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.701 [2024-07-15 15:30:32.293831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.701 [2024-07-15 15:30:32.293838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.701 [2024-07-15 15:30:32.293841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.701 [2024-07-15 15:30:32.293856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.293863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236ec0) 00:25:22.701 [2024-07-15 15:30:32.293869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.701 [2024-07-15 15:30:32.293879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22ba2c0, cid 3, qid 0 00:25:22.701 [2024-07-15 15:30:32.297892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:22.701 [2024-07-15 15:30:32.297900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:22.701 [2024-07-15 15:30:32.297904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:22.701 [2024-07-15 15:30:32.297907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22ba2c0) on tqpair=0x2236ec0 00:25:22.701 [2024-07-15 15:30:32.297915] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:22.960 0% 00:25:22.960 Data Units Read: 0 00:25:22.960 Data Units Written: 0 00:25:22.960 Host Read Commands: 0 00:25:22.960 Host Write Commands: 0 00:25:22.960 Controller Busy Time: 0 minutes 00:25:22.960 Power Cycles: 0 00:25:22.960 Power On Hours: 0 hours 00:25:22.960 Unsafe Shutdowns: 0 00:25:22.960 Unrecoverable Media Errors: 0 00:25:22.960 Lifetime Error Log Entries: 0 00:25:22.960 Warning Temperature Time: 0 minutes 00:25:22.960 Critical Temperature Time: 0 minutes 00:25:22.960 00:25:22.960 Number of Queues 00:25:22.960 ================ 00:25:22.960 Number of I/O Submission Queues: 127 00:25:22.960 Number of I/O Completion Queues: 127 00:25:22.960 00:25:22.960 Active Namespaces 00:25:22.960 ================= 00:25:22.960 Namespace ID:1 00:25:22.960 Error Recovery Timeout: Unlimited 00:25:22.960 Command Set Identifier: NVM (00h) 00:25:22.960 Deallocate: Supported 00:25:22.960 Deallocated/Unwritten Error: Not Supported 00:25:22.960 Deallocated Read Value: Unknown 00:25:22.960 Deallocate in Write Zeroes: Not Supported 00:25:22.960 Deallocated Guard Field: 0xFFFF 00:25:22.960 Flush: Supported 00:25:22.960 Reservation: Supported 00:25:22.960 Namespace Sharing Capabilities: Multiple Controllers 00:25:22.960 Size (in LBAs): 131072 (0GiB) 00:25:22.960 Capacity (in LBAs): 131072 (0GiB) 
00:25:22.960 Utilization (in LBAs): 131072 (0GiB) 00:25:22.960 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:22.960 EUI64: ABCDEF0123456789 00:25:22.961 UUID: d08fab12-b70f-4878-ad9b-a47873daa84e 00:25:22.961 Thin Provisioning: Not Supported 00:25:22.961 Per-NS Atomic Units: Yes 00:25:22.961 Atomic Boundary Size (Normal): 0 00:25:22.961 Atomic Boundary Size (PFail): 0 00:25:22.961 Atomic Boundary Offset: 0 00:25:22.961 Maximum Single Source Range Length: 65535 00:25:22.961 Maximum Copy Length: 65535 00:25:22.961 Maximum Source Range Count: 1 00:25:22.961 NGUID/EUI64 Never Reused: No 00:25:22.961 Namespace Write Protected: No 00:25:22.961 Number of LBA Formats: 1 00:25:22.961 Current LBA Format: LBA Format #00 00:25:22.961 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:22.961 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.961 rmmod nvme_tcp 00:25:22.961 rmmod nvme_fabrics 00:25:22.961 rmmod nvme_keyring 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 804484 ']' 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 804484 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 804484 ']' 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 804484 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 804484 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 804484' 00:25:22.961 killing process with pid 804484 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 804484 00:25:22.961 15:30:32 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@972 -- # wait 804484 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:23.221 15:30:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.130 15:30:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:25.130 00:25:25.130 real 0m11.325s 00:25:25.130 user 0m7.731s 00:25:25.130 sys 0m5.983s 00:25:25.130 15:30:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:25.130 15:30:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:25.130 ************************************ 00:25:25.130 END TEST nvmf_identify 00:25:25.130 ************************************ 00:25:25.130 15:30:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:25.130 15:30:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:25.130 15:30:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:25.130 15:30:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.130 15:30:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.130 ************************************ 00:25:25.130 START TEST nvmf_perf 00:25:25.130 ************************************ 00:25:25.130 15:30:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:25.390 * Looking for test storage... 
00:25:25.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.390 15:30:34 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.390 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.391 15:30:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:33.533 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:33.533 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:33.533 Found net devices under 0000:31:00.0: cvl_0_0 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:33.533 Found net devices under 0000:31:00.1: cvl_0_1 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.533 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:25:33.534 00:25:33.534 --- 10.0.0.2 ping statistics --- 00:25:33.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.534 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:25:33.534 00:25:33.534 --- 10.0.0.1 ping statistics --- 00:25:33.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.534 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=809062 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 809062 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 809062 ']' 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.534 15:30:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:33.534 [2024-07-15 15:30:42.493628] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:33.534 [2024-07-15 15:30:42.493702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.534 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.534 [2024-07-15 15:30:42.570336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.534 [2024-07-15 15:30:42.646206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.534 [2024-07-15 15:30:42.646245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:33.534 [2024-07-15 15:30:42.646253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.534 [2024-07-15 15:30:42.646259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.534 [2024-07-15 15:30:42.646265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.534 [2024-07-15 15:30:42.646372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.534 [2024-07-15 15:30:42.646493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.534 [2024-07-15 15:30:42.646652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.534 [2024-07-15 15:30:42.646653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.797 15:30:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.797 15:30:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:33.797 15:30:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:33.798 15:30:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.798 15:30:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:33.798 15:30:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.798 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:33.798 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:34.369 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:34.369 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:34.369 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:34.369 15:30:43 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:34.653 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:34.653 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:34.653 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:34.653 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:34.653 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:34.914 [2024-07-15 15:30:44.285299] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.914 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.914 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:34.914 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.175 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:35.175 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:35.434 15:30:44 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.434 [2024-07-15 15:30:44.971896] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.434 15:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:35.694 15:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:35.694 15:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:35.694 15:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:35.694 15:30:45 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:37.077 Initializing NVMe Controllers 00:25:37.077 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:37.077 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:37.077 Initialization complete. Launching workers. 00:25:37.077 ======================================================== 00:25:37.077 Latency(us) 00:25:37.078 Device Information : IOPS MiB/s Average min max 00:25:37.078 PCIE (0000:65:00.0) NSID 1 from core 0: 79731.84 311.45 400.64 13.39 5745.82 00:25:37.078 ======================================================== 00:25:37.078 Total : 79731.84 311.45 400.64 13.39 5745.82 00:25:37.078 00:25:37.078 15:30:46 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.078 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.459 Initializing NVMe Controllers 00:25:38.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.459 Initialization complete. Launching workers. 
00:25:38.459 ======================================================== 00:25:38.459 Latency(us) 00:25:38.459 Device Information : IOPS MiB/s Average min max 00:25:38.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10116.04 293.68 45894.77 00:25:38.459 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15221.35 4996.89 49865.20 00:25:38.459 ======================================================== 00:25:38.459 Total : 168.00 0.66 12121.70 293.68 49865.20 00:25:38.459 00:25:38.459 15:30:47 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.459 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.842 Initializing NVMe Controllers 00:25:39.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:39.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:39.842 Initialization complete. Launching workers. 00:25:39.842 ======================================================== 00:25:39.842 Latency(us) 00:25:39.842 Device Information : IOPS MiB/s Average min max 00:25:39.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10474.79 40.92 3055.07 413.41 6721.45 00:25:39.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3678.88 14.37 8746.19 6312.13 16304.94 00:25:39.842 ======================================================== 00:25:39.842 Total : 14153.66 55.29 4534.33 413.41 16304.94 00:25:39.842 00:25:39.842 15:30:49 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:39.842 15:30:49 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:39.842 15:30:49 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.842 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.138 Initializing NVMe Controllers 00:25:43.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.138 Controller IO queue size 128, less than required. 00:25:43.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.138 Controller IO queue size 128, less than required. 00:25:43.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:43.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:43.138 Initialization complete. Launching workers. 
00:25:43.138 ======================================================== 00:25:43.138 Latency(us) 00:25:43.138 Device Information : IOPS MiB/s Average min max 00:25:43.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1940.90 485.22 66578.75 36122.88 128186.44 00:25:43.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.47 144.37 235464.46 63051.64 371855.12 00:25:43.138 ======================================================== 00:25:43.138 Total : 2518.37 629.59 105304.78 36122.88 371855.12 00:25:43.138 00:25:43.138 15:30:52 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:43.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.138 No valid NVMe controllers or AIO or URING devices found 00:25:43.138 Initializing NVMe Controllers 00:25:43.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:43.138 Controller IO queue size 128, less than required. 00:25:43.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.138 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:43.138 Controller IO queue size 128, less than required. 00:25:43.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:43.138 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:43.138 WARNING: Some requested NVMe devices were skipped 00:25:43.138 15:30:52 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:43.138 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.728 Initializing NVMe Controllers 00:25:45.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.728 Controller IO queue size 128, less than required. 00:25:45.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:45.728 Controller IO queue size 128, less than required. 00:25:45.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:45.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:45.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:45.728 Initialization complete. Launching workers. 
00:25:45.728 00:25:45.728 ==================== 00:25:45.728 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:45.728 TCP transport: 00:25:45.728 polls: 24933 00:25:45.728 idle_polls: 12535 00:25:45.728 sock_completions: 12398 00:25:45.728 nvme_completions: 5869 00:25:45.728 submitted_requests: 8790 00:25:45.728 queued_requests: 1 00:25:45.728 00:25:45.728 ==================== 00:25:45.728 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:45.728 TCP transport: 00:25:45.728 polls: 25627 00:25:45.728 idle_polls: 11137 00:25:45.728 sock_completions: 14490 00:25:45.728 nvme_completions: 6107 00:25:45.728 submitted_requests: 9096 00:25:45.728 queued_requests: 1 00:25:45.728 ======================================================== 00:25:45.728 Latency(us) 00:25:45.728 Device Information : IOPS MiB/s Average min max 00:25:45.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1466.98 366.75 88960.94 42052.75 146180.42 00:25:45.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1526.48 381.62 85037.83 45210.75 144606.27 00:25:45.728 ======================================================== 00:25:45.728 Total : 2993.46 748.37 86960.39 42052.75 146180.42 00:25:45.728 00:25:45.728 15:30:55 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:45.728 15:30:55 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.728 15:30:55 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:45.728 15:30:55 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:45.728 15:30:55 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:45.729 rmmod nvme_tcp 00:25:45.729 rmmod nvme_fabrics 00:25:45.729 rmmod nvme_keyring 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 809062 ']' 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 809062 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 809062 ']' 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 809062 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809062 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 809062' 00:25:45.729 killing process with pid 809062 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 809062 00:25:45.729 15:30:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 809062 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.268 15:30:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.178 15:30:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:50.178 00:25:50.178 real 0m24.630s 00:25:50.178 user 0m59.625s 00:25:50.178 sys 0m8.358s 00:25:50.178 15:30:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:50.178 15:30:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:50.178 ************************************ 00:25:50.178 END TEST nvmf_perf 00:25:50.178 ************************************ 00:25:50.178 15:30:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:50.178 15:30:59 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:50.178 15:30:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:50.178 15:30:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.178 15:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:50.178 ************************************ 00:25:50.178 START TEST nvmf_fio_host 00:25:50.178 ************************************ 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:50.178 * Looking for test storage... 
00:25:50.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.178 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.179 15:30:59 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.319 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:58.320 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
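The trace above builds tables of supported NIC PCI IDs (Intel E810 0x1592/0x159b, Intel X722 0x37d2, several Mellanox IDs) and then reports each matched function, e.g. "Found 0000:31:00.0 (0x8086 - 0x159b)". A minimal standalone sketch of that vendor:device classification is shown below; it reads sysfs directly instead of the script's cached PCI bus scan, and the driver names in the comments other than ice are assumptions, not taken from this run.

# Hypothetical sketch: classify NICs by PCI vendor:device ID, mirroring the
# e810/x722/mlx matching seen in the trace above. Driver names i40e and mlx5
# are assumed for illustration; only ice appears in this log (DRIVERS=ice).
for pci in /sys/bus/pci/devices/*; do
  vendor=$(cat "$pci/vendor")    # e.g. 0x8086
  device=$(cat "$pci/device")    # e.g. 0x159b
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo "Found ${pci##*/} ($vendor - $device): E810, ice driver" ;;
    0x8086:0x37d2)               echo "Found ${pci##*/} ($vendor - $device): X722, i40e assumed" ;;
    0x15b3:*)                    echo "Found ${pci##*/} ($vendor - $device): Mellanox, mlx5 assumed" ;;
  esac
done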
00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:58.320 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:58.320 Found net devices under 0000:31:00.0: cvl_0_0 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:58.320 Found net devices under 0000:31:00.1: cvl_0_1 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
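The "Found net devices under 0000:31:00.x" lines come from expanding /sys/bus/pci/devices/$pci/net/* for each matched PCI function, exactly as traced above (pci_net_devs=(...)). Below is a self-contained sketch of the same lookup; the two bus addresses are the E810 ports from this run, and the operstate read is an added illustration rather than part of common.sh.

# Sketch of the per-port netdev lookup: each PCI function exposes its bound
# network interfaces under /sys/bus/pci/devices/<BDF>/net/.
for pci in 0000:31:00.0 0000:31:00.1; do
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue    # skip functions with no netdev bound
    echo "Found net devices under $pci: ${dev##*/} ($(cat "$dev/operstate"))"
  done
done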
00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:58.320 15:31:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:58.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:58.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:25:58.320 00:25:58.320 --- 10.0.0.2 ping statistics --- 00:25:58.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.320 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:58.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:25:58.320 00:25:58.320 --- 10.0.0.1 ping statistics --- 00:25:58.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.320 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=816517 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 816517 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 816517 ']' 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.320 15:31:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.320 [2024-07-15 15:31:07.390931] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:25:58.320 [2024-07-15 15:31:07.390996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.320 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.320 [2024-07-15 15:31:07.467269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.320 [2024-07-15 15:31:07.541891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:58.320 [2024-07-15 15:31:07.541930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.320 [2024-07-15 15:31:07.541938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.320 [2024-07-15 15:31:07.541945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.320 [2024-07-15 15:31:07.541950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.320 [2024-07-15 15:31:07.542080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.320 [2024-07-15 15:31:07.542219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.320 [2024-07-15 15:31:07.542381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.320 [2024-07-15 15:31:07.542382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.581 15:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:58.581 15:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:58.581 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:58.841 [2024-07-15 15:31:08.305818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.841 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:58.841 15:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:58.841 15:31:08 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.841 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:59.101 Malloc1 00:25:59.101 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:59.361 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:59.361 15:31:08 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.622 [2024-07-15 15:31:09.043359] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.622 15:31:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:59.903 15:31:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:59.903 15:31:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:59.904 15:31:09 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:00.166 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:00.166 fio-3.35 00:26:00.166 Starting 1 thread 00:26:00.166 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.701 00:26:02.702 test: (groupid=0, jobs=1): err= 0: pid=817194: Mon Jul 15 15:31:11 2024 00:26:02.702 read: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2005msec) 00:26:02.702 slat (usec): min=2, max=279, avg= 2.18, stdev= 2.39 00:26:02.702 clat (usec): min=3355, max=10319, avg=5039.03, stdev=415.88 00:26:02.702 lat (usec): min=3357, max=10325, avg=5041.21, stdev=416.18 00:26:02.702 clat percentiles (usec): 00:26:02.702 | 1.00th=[ 4228], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4752], 00:26:02.702 | 30.00th=[ 4817], 40.00th=[ 4948], 50.00th=[ 5014], 60.00th=[ 5080], 00:26:02.702 | 70.00th=[ 5211], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5604], 00:26:02.702 | 99.00th=[ 5932], 99.50th=[ 6915], 99.90th=[ 9110], 99.95th=[ 9634], 00:26:02.702 | 99.99th=[10290] 00:26:02.702 bw ( KiB/s): min=54768, 
max=56528, per=99.97%, avg=56004.00, stdev=830.91, samples=4 00:26:02.702 iops : min=13692, max=14132, avg=14001.00, stdev=207.73, samples=4 00:26:02.702 write: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(110MiB/2005msec); 0 zone resets 00:26:02.702 slat (usec): min=2, max=268, avg= 2.30, stdev= 1.79 00:26:02.702 clat (usec): min=2643, max=8350, avg=4073.87, stdev=357.72 00:26:02.702 lat (usec): min=2645, max=8356, avg=4076.16, stdev=358.08 00:26:02.702 clat percentiles (usec): 00:26:02.702 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3851], 00:26:02.702 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4113], 00:26:02.702 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:26:02.702 | 99.00th=[ 4883], 99.50th=[ 6063], 99.90th=[ 7767], 99.95th=[ 8094], 00:26:02.702 | 99.99th=[ 8291] 00:26:02.702 bw ( KiB/s): min=55224, max=56496, per=100.00%, avg=56074.00, stdev=575.68, samples=4 00:26:02.702 iops : min=13806, max=14124, avg=14018.50, stdev=143.92, samples=4 00:26:02.702 lat (msec) : 4=20.47%, 10=79.52%, 20=0.01% 00:26:02.702 cpu : usr=75.40%, sys=22.85%, ctx=16, majf=0, minf=7 00:26:02.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:02.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:02.702 issued rwts: total=28080,28086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:02.702 00:26:02.702 Run status group 0 (all jobs): 00:26:02.702 READ: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2005-2005msec 00:26:02.702 WRITE: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=110MiB (115MB), run=2005-2005msec 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:02.702 15:31:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:02.702 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:02.702 fio-3.35 00:26:02.702 Starting 1 thread 00:26:02.702 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.273 00:26:05.273 test: (groupid=0, jobs=1): err= 0: pid=817820: Mon Jul 15 15:31:14 2024 00:26:05.273 read: IOPS=9304, BW=145MiB/s (152MB/s)(292MiB/2009msec) 00:26:05.273 slat (usec): min=3, max=113, avg= 3.65, stdev= 1.70 00:26:05.273 clat (usec): min=2121, max=17159, avg=8453.93, stdev=1978.06 00:26:05.273 lat (usec): min=2124, max=17162, avg=8457.58, stdev=1978.26 00:26:05.273 clat percentiles (usec): 00:26:05.273 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6718], 00:26:05.273 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 8848], 00:26:05.273 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:26:05.273 | 99.00th=[13829], 99.50th=[14484], 99.90th=[15008], 99.95th=[15270], 00:26:05.273 | 99.99th=[16319] 00:26:05.273 bw ( KiB/s): min=70176, max=79744, per=49.62%, avg=73872.00, stdev=4279.27, samples=4 00:26:05.273 iops : min= 4386, max= 4984, avg=4617.00, stdev=267.45, samples=4 00:26:05.273 write: IOPS=5418, BW=84.7MiB/s (88.8MB/s)(151MiB/1779msec); 0 zone resets 00:26:05.273 slat (usec): min=40, max=450, avg=41.40, stdev= 9.43 00:26:05.273 clat (usec): min=3861, max=16822, avg=9529.44, stdev=1609.70 00:26:05.273 lat (usec): min=3901, max=16955, avg=9570.85, stdev=1612.45 00:26:05.273 clat percentiles (usec): 00:26:05.273 | 1.00th=[ 6521], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8291], 00:26:05.273 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:26:05.273 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11731], 95.00th=[12780], 00:26:05.273 | 99.00th=[14484], 99.50th=[15139], 99.90th=[16450], 99.95th=[16581], 00:26:05.273 | 99.99th=[16909] 00:26:05.273 bw ( KiB/s): min=71648, max=82778, per=88.45%, avg=76686.50, stdev=4796.50, samples=4 00:26:05.273 iops : min= 4478, max= 5173, avg=4792.75, stdev=299.52, samples=4 00:26:05.273 lat (msec) : 4=0.26%, 10=74.21%, 20=25.53% 00:26:05.273 cpu : usr=83.91%, sys=14.04%, ctx=16, majf=0, minf=14 00:26:05.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:05.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:05.273 issued rwts: total=18693,9640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:05.273 00:26:05.273 Run status group 0 (all jobs): 00:26:05.273 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=292MiB (306MB), run=2009-2009msec 00:26:05.273 WRITE: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=151MiB (158MB), run=1779-1779msec 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:05.273 rmmod nvme_tcp 00:26:05.273 rmmod nvme_fabrics 00:26:05.273 rmmod nvme_keyring 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 816517 ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 816517 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 816517 ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 816517 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 816517 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 816517' 00:26:05.273 killing process with pid 816517 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 816517 00:26:05.273 15:31:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 816517 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.533 15:31:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.076 15:31:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:08.076 00:26:08.076 real 0m17.632s 00:26:08.076 user 1m4.770s 00:26:08.076 sys 0m7.521s 00:26:08.076 15:31:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:08.076 15:31:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.076 ************************************ 00:26:08.076 END TEST nvmf_fio_host 00:26:08.076 ************************************ 00:26:08.076 15:31:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:08.076 15:31:17 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:08.076 15:31:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:08.076 15:31:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.076 15:31:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:08.076 ************************************ 00:26:08.076 START TEST nvmf_failover 00:26:08.076 ************************************ 00:26:08.076 15:31:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:08.076 * Looking for test storage... 
00:26:08.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:26:08.077 15:31:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:16.255 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:16.255 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.255 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:16.256 Found net devices under 0000:31:00.0: cvl_0_0 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:16.256 Found net devices under 0000:31:00.1: cvl_0_1 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:16.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:26:16.256 00:26:16.256 --- 10.0.0.2 ping statistics --- 00:26:16.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.256 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:26:16.256 00:26:16.256 --- 10.0.0.1 ping statistics --- 00:26:16.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.256 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=822899 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 822899 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 822899 ']' 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:16.256 15:31:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:16.256 [2024-07-15 15:31:24.947311] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:26:16.256 [2024-07-15 15:31:24.947361] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.256 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.256 [2024-07-15 15:31:25.015393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:16.256 [2024-07-15 15:31:25.079212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.256 [2024-07-15 15:31:25.079246] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.256 [2024-07-15 15:31:25.079253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.256 [2024-07-15 15:31:25.079260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.256 [2024-07-15 15:31:25.079265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.256 [2024-07-15 15:31:25.079366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.256 [2024-07-15 15:31:25.079521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.256 [2024-07-15 15:31:25.079522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.256 15:31:25 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:16.541 [2024-07-15 15:31:25.895104] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.541 15:31:25 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:16.541 Malloc0 00:26:16.541 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:16.801 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.061 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.061 [2024-07-15 15:31:26.588780] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.061 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:17.320 [2024-07-15 
15:31:26.757212] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.320 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:17.320 [2024-07-15 15:31:26.925733] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=823266 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 823266 /var/tmp/bdevperf.sock 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 823266 ']' 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:17.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.580 15:31:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:18.518 15:31:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:18.518 15:31:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:18.518 15:31:27 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:18.518 NVMe0n1 00:26:18.518 15:31:28 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:18.778 00:26:18.778 15:31:28 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=823598 00:26:18.778 15:31:28 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:18.778 15:31:28 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:20.161 15:31:29 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.161 [2024-07-15 15:31:29.485798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 
is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485945] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485949] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485954] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485963] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485967] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485976] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485980] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.161 [2024-07-15 15:31:29.485989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.162 [2024-07-15 15:31:29.485993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.162 [2024-07-15 15:31:29.485997] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe78ad0 is same with the state(5) to be set 00:26:20.162 15:31:29 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:23.457 15:31:32 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:23.457 00:26:23.457 15:31:32 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:23.457 [2024-07-15 15:31:32.927701] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927750] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927756] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927763] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927769] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is 
same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927916] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927923] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927929] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927935] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927941] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927947] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927972] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 [2024-07-15 15:31:32.927984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a1d0 is same with the state(5) to be set 00:26:23.457 15:31:32 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:26.752 15:31:35 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.752 [2024-07-15 15:31:36.111112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.752 15:31:36 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:27.698 15:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:27.698 [2024-07-15 15:31:37.290605] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.698 [2024-07-15 15:31:37.290643] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.698 [2024-07-15 15:31:37.290651] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.698 [2024-07-15 15:31:37.290657] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.698 [2024-07-15 15:31:37.290664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.699 [2024-07-15 15:31:37.290670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with 
the state(5) to be set 00:26:27.699 [2024-07-15 15:31:37.290812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.699 [2024-07-15 15:31:37.290819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.699 [2024-07-15 15:31:37.290825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7a8b0 is same with the state(5) to be set 00:26:27.960 15:31:37 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 823598 00:26:34.558 0 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 823266 ']' 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 823266' 00:26:34.558 killing process with pid 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 823266 00:26:34.558 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.558 [2024-07-15 15:31:26.993085] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:26:34.558 [2024-07-15 15:31:26.993142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823266 ] 00:26:34.558 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.558 [2024-07-15 15:31:27.056190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.558 [2024-07-15 15:31:27.120444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.558 Running I/O for 15 seconds... 
00:26:34.558 [2024-07-15 15:31:29.486278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.558 [2024-07-15 15:31:29.486401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.558 [2024-07-15 15:31:29.486410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486473] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486639] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.486919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.486936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.486952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.486968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.559 [2024-07-15 15:31:29.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.486993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.487000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.487016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.559 [2024-07-15 15:31:29.487032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.487048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.487064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.487080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.487096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.559 [2024-07-15 15:31:29.487114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.559 [2024-07-15 15:31:29.487126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487149] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487307] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 
[2024-07-15 15:31:29.487640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.560 [2024-07-15 15:31:29.487811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.560 [2024-07-15 15:31:29.487820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.487986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.561 [2024-07-15 15:31:29.487993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98944 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488147] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98952 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98960 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98968 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98016 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98024 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98032 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98040 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98056 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98064 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98072 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98080 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 
15:31:29.488469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98088 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.488483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.561 [2024-07-15 15:31:29.488488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.561 [2024-07-15 15:31:29.488494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98096 len:8 PRP1 0x0 PRP2 0x0 00:26:34.561 [2024-07-15 15:31:29.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.561 [2024-07-15 15:31:29.498676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98104 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98112 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98120 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98128 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98136 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.562 [2024-07-15 15:31:29.498820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.562 [2024-07-15 15:31:29.498826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98976 len:8 PRP1 0x0 PRP2 0x0 00:26:34.562 [2024-07-15 15:31:29.498833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498871] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e3aed0 was disconnected and freed. reset controller. 00:26:34.562 [2024-07-15 15:31:29.498880] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:34.562 [2024-07-15 15:31:29.498924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.562 [2024-07-15 15:31:29.498933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.562 [2024-07-15 15:31:29.498949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.562 [2024-07-15 15:31:29.498964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.562 [2024-07-15 15:31:29.498979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:29.498992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.562 [2024-07-15 15:31:29.499023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f0a0 (9): Bad file descriptor 00:26:34.562 [2024-07-15 15:31:29.502535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.562 [2024-07-15 15:31:29.625494] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
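The notices above make up the first failover episode of this run: queued I/O on the qpair to 10.0.0.2:4420 is aborted with SQ DELETION, the qpair is disconnected and freed, bdev_nvme fails over to 10.0.0.2:4421 and the controller reset completes. For orientation only, a minimal sketch of how such a two-path setup is typically created through SPDK's rpc.py is shown below; the bdev name NVMe0 and the script path are assumptions, while the addresses, ports and subsystem NQN are the ones printed in the log.
+ rpc_py=./scripts/rpc.py                      # assumed location inside the SPDK repo
+ # expose the subsystem on a second TCP listener so an alternate path exists
+ $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
+ # attach both paths under the same controller name; the second trid is then
+ # available as the alternate path reported by bdev_nvme_failover_trid above
+ $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
+ $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1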
00:26:34.562 [2024-07-15 15:31:32.930032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 
15:31:32.930237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.562 [2024-07-15 15:31:32.930367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.562 [2024-07-15 15:31:32.930376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:11 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108864 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.563 [2024-07-15 15:31:32.930876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:34.563 [2024-07-15 15:31:32.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.930991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.930998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.931014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.931030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.931046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.931062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.563 [2024-07-15 15:31:32.931078] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.563 [2024-07-15 15:31:32.931087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.564 [2024-07-15 15:31:32.931403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109168 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109176 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109184 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109192 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109200 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109208 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109216 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109224 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109232 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109240 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931678] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109248 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109256 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 
[2024-07-15 15:31:32.931737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109264 len:8 PRP1 0x0 PRP2 0x0 00:26:34.564 [2024-07-15 15:31:32.931750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.564 [2024-07-15 15:31:32.931757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.564 [2024-07-15 15:31:32.931763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.564 [2024-07-15 15:31:32.931769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109272 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109280 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109288 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109296 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109304 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931894] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109312 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109320 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109328 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.931977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109336 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.931984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.931992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.931997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109344 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109352 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109360 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109368 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109376 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109384 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109392 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109400 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 
[2024-07-15 15:31:32.932209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109408 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109416 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109424 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109432 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109440 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109448 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109456 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109464 len:8 PRP1 0x0 PRP2 0x0 00:26:34.565 [2024-07-15 15:31:32.932395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.565 [2024-07-15 15:31:32.932402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.565 [2024-07-15 15:31:32.932407] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.565 [2024-07-15 15:31:32.932413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109472 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.932420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.932427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109480 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109488 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109496 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:109504 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109512 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108688 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108696 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.566 [2024-07-15 15:31:32.942703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.566 [2024-07-15 15:31:32.942709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108704 len:8 PRP1 0x0 PRP2 0x0 00:26:34.566 [2024-07-15 15:31:32.942715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:32.942753] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e6bc40 was disconnected and freed. reset controller. 
00:26:34.566 [2024-07-15 15:31:32.942762] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:26:34.566 [2024-07-15 15:31:32.942790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:34.566 [2024-07-15 15:31:32.942798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.566 [2024-07-15 15:31:32.942808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:34.566 [2024-07-15 15:31:32.942815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.566 [2024-07-15 15:31:32.942823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:34.566 [2024-07-15 15:31:32.942830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.566 [2024-07-15 15:31:32.942837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:34.566 [2024-07-15 15:31:32.942844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:34.566 [2024-07-15 15:31:32.942851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:34.566 [2024-07-15 15:31:32.942895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f0a0 (9): Bad file descriptor 
00:26:34.566 [2024-07-15 15:31:32.946392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:26:34.566 [2024-07-15 15:31:32.984763] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.566 [2024-07-15 15:31:37.292743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292955] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.292978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.292992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.566 [2024-07-15 15:31:37.293136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.566 [2024-07-15 15:31:37.293143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:37224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:31 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.567 [2024-07-15 15:31:37.293339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:37336 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 
[2024-07-15 15:31:37.293614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:37456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.567 [2024-07-15 15:31:37.293748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.567 [2024-07-15 15:31:37.293755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:37496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:37504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:37592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.293992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.293999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.568 [2024-07-15 15:31:37.294114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37672 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37680 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37688 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37696 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37704 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37712 len:8 PRP1 
0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37720 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37728 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294345] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37736 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37744 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37752 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.568 [2024-07-15 15:31:37.294421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.568 [2024-07-15 15:31:37.294428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37760 len:8 PRP1 0x0 PRP2 0x0 00:26:34.568 [2024-07-15 15:31:37.294435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.568 [2024-07-15 15:31:37.294443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:34.569 [2024-07-15 15:31:37.294448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:34.569 [2024-07-15 15:31:37.294457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37768 len:8 PRP1 0x0 PRP2 0x0 00:26:34.569 [2024-07-15 15:31:37.294464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same four-message group (aborting queued i/o, Command completed manually, WRITE sqid:1 cid:0 nsid:1 len:8, ABORTED - SQ DELETION) repeats for every remaining queued write, lba:37776 through lba:38032 in steps of 8, timestamps 15:31:37.294471 through 15:31:37.305074]
00:26:34.570 [2024-07-15 15:31:37.305112] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e6f170 was disconnected and freed. reset controller.
00:26:34.570 [2024-07-15 15:31:37.305121] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:34.570 [2024-07-15 15:31:37.305148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.570 [2024-07-15 15:31:37.305156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.570 [2024-07-15 15:31:37.305166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.570 [2024-07-15 15:31:37.305173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.570 [2024-07-15 15:31:37.305181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.570 [2024-07-15 15:31:37.305187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.570 [2024-07-15 15:31:37.305195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.570 [2024-07-15 15:31:37.305202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.570 [2024-07-15 15:31:37.305209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.570 [2024-07-15 15:31:37.305246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3f0a0 (9): Bad file descriptor 00:26:34.570 [2024-07-15 15:31:37.308746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.570 [2024-07-15 15:31:37.384639] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:34.570 00:26:34.570 Latency(us) 00:26:34.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.570 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:34.570 Verification LBA range: start 0x0 length 0x4000 00:26:34.570 NVMe0n1 : 15.00 9998.33 39.06 562.28 0.00 12092.54 515.41 23156.05 00:26:34.570 =================================================================================================================== 00:26:34.570 Total : 9998.33 39.06 562.28 0.00 12092.54 515.41 23156.05 00:26:34.570 Received shutdown signal, test time was about 15.000000 seconds 00:26:34.570 00:26:34.570 Latency(us) 00:26:34.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.570 =================================================================================================================== 00:26:34.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=826447 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 826447 /var/tmp/bdevperf.sock 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 826447 ']' 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
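As a cross-check on the 15-second run summarized above (a back-of-the-envelope calculation, not part of the test output): with 4096-byte I/Os, throughput in MiB/s is IOPS x IO size / 2^20, so

  9998.33 IOPS x 4096 B / 1048576 ≈ 39.06 MiB/s

which matches the reported MiB/s column; the 562.28 Fail/s column counts I/Os completed with an error, consistent with the queued writes aborted during the three controller resets logged above.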
00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:34.570 15:31:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:35.184 15:31:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:35.184 15:31:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:35.184 15:31:44 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:35.184 [2024-07-15 15:31:44.645162] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:35.184 15:31:44 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:35.468 [2024-07-15 15:31:44.805555] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:35.468 15:31:44 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:35.468 NVMe0n1 00:26:35.468 15:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.038 00:26:36.038 15:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.299 00:26:36.299 15:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:36.299 15:31:45 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:36.559 15:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:36.559 15:31:46 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:39.852 15:31:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:39.852 15:31:49 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:39.852 15:31:49 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=827631 00:26:39.852 15:31:49 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 827631 00:26:39.852 15:31:49 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:41.230 0 00:26:41.230 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:41.230 [2024-07-15 15:31:43.731587] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:26:41.230 [2024-07-15 15:31:43.731643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826447 ] 00:26:41.230 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.230 [2024-07-15 15:31:43.794939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.230 [2024-07-15 15:31:43.858311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.230 [2024-07-15 15:31:46.138430] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:41.230 [2024-07-15 15:31:46.138475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.230 [2024-07-15 15:31:46.138487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.230 [2024-07-15 15:31:46.138496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.230 [2024-07-15 15:31:46.138503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.230 [2024-07-15 15:31:46.138511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.230 [2024-07-15 15:31:46.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.230 [2024-07-15 15:31:46.138525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.230 [2024-07-15 15:31:46.138532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.230 [2024-07-15 15:31:46.138543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.230 [2024-07-15 15:31:46.138571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.230 [2024-07-15 15:31:46.138585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25260a0 (9): Bad file descriptor 00:26:41.230 [2024-07-15 15:31:46.192346] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:41.230 Running I/O for 1 seconds... 
00:26:41.230 00:26:41.230 Latency(us) 00:26:41.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.230 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:41.230 Verification LBA range: start 0x0 length 0x4000 00:26:41.230 NVMe0n1 : 1.01 12976.29 50.69 0.00 0.00 9804.69 1310.72 11796.48 00:26:41.230 =================================================================================================================== 00:26:41.230 Total : 12976.29 50.69 0.00 0.00 9804.69 1310.72 11796.48 00:26:41.230 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:41.230 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:41.231 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:41.231 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:41.231 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:41.490 15:31:50 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:41.749 15:31:51 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 826447 ']' 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 826447' 00:26:45.043 killing process with pid 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 826447 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:45.043 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:45.303 15:31:54 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.303 rmmod nvme_tcp 00:26:45.303 rmmod nvme_fabrics 00:26:45.303 rmmod nvme_keyring 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 822899 ']' 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 822899 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 822899 ']' 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 822899 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822899 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822899' 00:26:45.303 killing process with pid 822899 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 822899 00:26:45.303 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 822899 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.563 15:31:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.473 15:31:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.473 00:26:47.473 real 0m39.850s 00:26:47.473 user 2m2.045s 00:26:47.473 sys 0m8.319s 00:26:47.473 15:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.473 15:31:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:47.473 
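Condensed from the failover.sh trace above, a minimal sketch of the RPC sequence this test drives. It assumes the target and bdevperf are already running with the socket paths and NQN used in this run; it is an illustration of the flow, not the test script itself.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # expose the same subsystem on two extra TCP portals on the target
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # build one NVMe bdev (NVMe0) in bdevperf with all three portals as failover paths
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  # drop the active path; bdev_nvme starts a failover to the next portal
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drive I/O through the surviving paths and count the successful resets in the captured log
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  grep -c 'Resetting controller successful' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt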
************************************ 00:26:47.473 END TEST nvmf_failover 00:26:47.473 ************************************ 00:26:47.473 15:31:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:47.473 15:31:57 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:47.473 15:31:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:47.473 15:31:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.473 15:31:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.473 ************************************ 00:26:47.473 START TEST nvmf_host_discovery 00:26:47.473 ************************************ 00:26:47.473 15:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:47.734 * Looking for test storage... 00:26:47.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:47.734 15:31:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.734 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.735 15:31:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.873 15:32:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.873 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:55.873 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:55.874 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.874 15:32:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:55.874 Found net devices under 0000:31:00.0: cvl_0_0 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:55.874 Found net devices under 0000:31:00.1: cvl_0_1 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.874 15:32:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:26:55.874 00:26:55.874 --- 10.0.0.2 ping statistics --- 00:26:55.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.874 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:55.874 00:26:55.874 --- 10.0.0.1 ping statistics --- 00:26:55.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.874 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=833098 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
833098 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 833098 ']' 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.874 15:32:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.874 [2024-07-15 15:32:04.998989] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:26:55.874 [2024-07-15 15:32:04.999056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.874 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.874 [2024-07-15 15:32:05.074693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.874 [2024-07-15 15:32:05.147559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.874 [2024-07-15 15:32:05.147598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.874 [2024-07-15 15:32:05.147606] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.874 [2024-07-15 15:32:05.147612] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.874 [2024-07-15 15:32:05.147618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.874 [2024-07-15 15:32:05.147636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 [2024-07-15 15:32:05.813926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 [2024-07-15 15:32:05.826067] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 null0 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 null1 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=833215 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 833215 /tmp/host.sock 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 833215 ']' 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:56.446 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.446 15:32:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.446 [2024-07-15 15:32:05.914578] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:26:56.446 [2024-07-15 15:32:05.914624] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833215 ] 00:26:56.446 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.446 [2024-07-15 15:32:05.976247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.446 [2024-07-15 15:32:06.040724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.388 15:32:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 [2024-07-15 15:32:07.025190] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:57.650 15:32:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:58.222 [2024-07-15 15:32:07.691550] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.222 [2024-07-15 15:32:07.691574] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.222 [2024-07-15 15:32:07.691588] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.222 [2024-07-15 15:32:07.822024] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:58.483 [2024-07-15 15:32:07.883298] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:58.483 [2024-07-15 15:32:07.883322] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.744 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:59.021 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.022 [2024-07-15 15:32:08.621491] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:59.022 [2024-07-15 15:32:08.622138] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:59.022 [2024-07-15 15:32:08.622164] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:59.022 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:59.302 [2024-07-15 15:32:08.752996] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:59.302 15:32:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:59.302 [2024-07-15 15:32:08.817608] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.302 [2024-07-15 15:32:08.817625] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:59.302 [2024-07-15 15:32:08.817631] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.242 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.503 [2024-07-15 15:32:09.901877] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:00.503 [2024-07-15 15:32:09.901904] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:00.503 [2024-07-15 15:32:09.904152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.503 [2024-07-15 15:32:09.904169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.503 [2024-07-15 15:32:09.904179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.503 [2024-07-15 15:32:09.904191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.503 [2024-07-15 15:32:09.904198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.503 [2024-07-15 15:32:09.904205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.503 [2024-07-15 15:32:09.904213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:00.503 [2024-07-15 15:32:09.904220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.503 [2024-07-15 15:32:09.904227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.503 15:32:09 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:00.503 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.504 [2024-07-15 15:32:09.914167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.924206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.924510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.924525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.924533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.924544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.924555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.924562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.924570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.924581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
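The repeated common/autotest_common.sh@912-918 traces above (local cond / local max=10 / (( max-- )) / eval / sleep 1) all come from the test's waitforcondition helper. Reconstructed roughly from the xtrace; the exact source may differ in detail:

    # Bounded poll loop reconstructed from the traced lines: re-evaluate an
    # arbitrary bash condition up to 10 times, one second apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0        # condition became true
            fi
            sleep 1
        done
        return 1                # condition never became true within the budget
    }

    # e.g. host/discovery.sh@129 above:
    # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'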
00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.504 [2024-07-15 15:32:09.934264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.934572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.934584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.934595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.934605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.934622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.934629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.934636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.934646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.504 [2024-07-15 15:32:09.944313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.944576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.944587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.944594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.944604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.944614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.944621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.944628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.944638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
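errno 111 is ECONNREFUSED: host/discovery.sh@127 just removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 in the reset loop above is refused until the discovery poller drops that path and only 4421 survives (the "not found" / "found again" lines further down). One hedged way to cross-check both sides by hand; the rpc.py path and the target-side jq filter are assumptions rather than lines from this script:

    # Assumed workspace path for rpc.py in this job.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: listeners still exported by the subsystem (expect only 4421).
    $rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode0 \
        | jq -r '.[].address.trsvcid'

    # Host side: trsvcids of the paths the attached controller still holds,
    # using the same filter as host/discovery.sh@63 above.
    $rpc_py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'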
00:27:00.504 [2024-07-15 15:32:09.954365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.954590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.954602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.954610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.954621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.954631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.954637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.954644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.954654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.504 [2024-07-15 15:32:09.964418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:00.504 [2024-07-15 15:32:09.964727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.964739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.964746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.964757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.964767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.964773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.964780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.964790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:00.504 [2024-07-15 15:32:09.974469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.974779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.974790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.974797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.974808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.974831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.974839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.974845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.974856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.504 [2024-07-15 15:32:09.984522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:00.504 [2024-07-15 15:32:09.984834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.504 [2024-07-15 15:32:09.984846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee6bc0 with addr=10.0.0.2, port=4420 00:27:00.504 [2024-07-15 15:32:09.984852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee6bc0 is same with the state(5) to be set 00:27:00.504 [2024-07-15 15:32:09.984863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee6bc0 (9): Bad file descriptor 00:27:00.504 [2024-07-15 15:32:09.984879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:00.504 [2024-07-15 15:32:09.984891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:00.504 [2024-07-15 15:32:09.984898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:00.504 [2024-07-15 15:32:09.984912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
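The notification checks that follow (host/discovery.sh@74-75) count target-side events newer than the last seen id and then advance that high-water mark, which is why notify_id steps 0 -> 1 -> 2 across the log while the count read at -i 2 stays 0 until the discovery controller is torn down. A reconstruction from the xtrace, not the verbatim discovery.sh source:

    # Notification bookkeeping as traced at host/discovery.sh@74-75; rpc_cmd is
    # the test's RPC wrapper seen throughout this log.
    notify_id=0

    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # is_notification_count_eq N (host/discovery.sh@79-80) then just wraps this in
    # waitforcondition 'get_notification_count && ((notification_count == expected_count))'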
00:27:00.504 15:32:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.504 [2024-07-15 15:32:09.991907] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:00.504 [2024-07-15 15:32:09.991925] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:00.504 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.505 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:00.764 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:00.765 
15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.765 15:32:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.704 [2024-07-15 15:32:11.322003] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.704 [2024-07-15 15:32:11.322023] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.704 [2024-07-15 15:32:11.322035] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.964 [2024-07-15 15:32:11.451446] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:01.964 [2024-07-15 15:32:11.557482] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:01.964 [2024-07-15 15:32:11.557512] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:01.964 15:32:11 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.964 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.964 request: 00:27:01.964 { 00:27:01.964 "name": "nvme", 00:27:01.964 "trtype": "tcp", 00:27:01.964 "traddr": "10.0.0.2", 00:27:01.964 "adrfam": "ipv4", 00:27:01.964 "trsvcid": "8009", 00:27:01.965 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:01.965 "wait_for_attach": true, 00:27:01.965 "method": "bdev_nvme_start_discovery", 00:27:01.965 "req_id": 1 00:27:01.965 } 00:27:01.965 Got JSON-RPC error response 00:27:01.965 response: 00:27:01.965 { 00:27:01.965 "code": -17, 00:27:01.965 "message": "File exists" 00:27:01.965 } 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.965 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.225 request: 00:27:02.225 { 00:27:02.225 "name": "nvme_second", 00:27:02.225 "trtype": "tcp", 00:27:02.225 "traddr": "10.0.0.2", 00:27:02.225 "adrfam": "ipv4", 00:27:02.225 "trsvcid": "8009", 00:27:02.225 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:02.225 "wait_for_attach": true, 00:27:02.225 "method": "bdev_nvme_start_discovery", 00:27:02.225 "req_id": 1 00:27:02.225 } 00:27:02.225 Got JSON-RPC error response 00:27:02.225 response: 00:27:02.225 { 00:27:02.225 "code": -17, 00:27:02.225 "message": "File exists" 00:27:02.225 } 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# sort 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.225 15:32:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:03.606 [2024-07-15 15:32:12.821195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.606 [2024-07-15 15:32:12.821234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f25000 with addr=10.0.0.2, port=8010 00:27:03.606 [2024-07-15 15:32:12.821248] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:03.606 [2024-07-15 15:32:12.821256] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:03.606 [2024-07-15 15:32:12.821263] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:04.546 [2024-07-15 15:32:13.823451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.546 [2024-07-15 15:32:13.823475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f25000 with addr=10.0.0.2, port=8010 00:27:04.546 [2024-07-15 15:32:13.823486] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:04.546 [2024-07-15 15:32:13.823493] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:04.546 [2024-07-15 15:32:13.823499] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:05.486 [2024-07-15 15:32:14.825425] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:05.486 request: 00:27:05.486 { 00:27:05.486 "name": "nvme_second", 00:27:05.486 "trtype": "tcp", 00:27:05.486 "traddr": "10.0.0.2", 00:27:05.486 "adrfam": "ipv4", 00:27:05.486 "trsvcid": "8010", 00:27:05.486 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:05.486 "wait_for_attach": false, 00:27:05.486 "attach_timeout_ms": 3000, 00:27:05.486 "method": "bdev_nvme_start_discovery", 00:27:05.486 "req_id": 1 
00:27:05.486 } 00:27:05.486 Got JSON-RPC error response 00:27:05.486 response: 00:27:05.486 { 00:27:05.486 "code": -110, 00:27:05.486 "message": "Connection timed out" 00:27:05.486 } 00:27:05.486 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:05.486 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:27:05.486 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 833215 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.487 rmmod nvme_tcp 00:27:05.487 rmmod nvme_fabrics 00:27:05.487 rmmod nvme_keyring 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 833098 ']' 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 833098 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 833098 ']' 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 833098 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:05.487 15:32:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 833098 
00:27:05.487 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:05.487 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:05.487 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 833098' 00:27:05.487 killing process with pid 833098 00:27:05.487 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 833098 00:27:05.487 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 833098 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.747 15:32:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.661 15:32:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.661 00:27:07.661 real 0m20.152s 00:27:07.661 user 0m23.072s 00:27:07.661 sys 0m7.084s 00:27:07.661 15:32:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.661 15:32:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:07.661 ************************************ 00:27:07.661 END TEST nvmf_host_discovery 00:27:07.661 ************************************ 00:27:07.661 15:32:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:07.661 15:32:17 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:07.661 15:32:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.661 15:32:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.661 15:32:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.923 ************************************ 00:27:07.923 START TEST nvmf_host_multipath_status 00:27:07.923 ************************************ 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:07.923 * Looking for test storage... 
00:27:07.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.923 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:07.924 15:32:17 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:27:07.924 15:32:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:16.087 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:16.087 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:16.087 Found net devices under 0000:31:00.0: cvl_0_0 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:16.087 Found net devices under 0000:31:00.1: cvl_0_1 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.087 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.088 15:32:24 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:16.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:27:16.088 00:27:16.088 --- 10.0.0.2 ping statistics --- 00:27:16.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.088 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:27:16.088 00:27:16.088 --- 10.0.0.1 ping statistics --- 00:27:16.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.088 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=839631 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 839631 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 839631 ']' 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:16.088 15:32:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:16.088 [2024-07-15 15:32:25.038979] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:27:16.088 [2024-07-15 15:32:25.039043] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.088 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.088 [2024-07-15 15:32:25.114268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:16.088 [2024-07-15 15:32:25.187943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.088 [2024-07-15 15:32:25.187981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.088 [2024-07-15 15:32:25.187988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.088 [2024-07-15 15:32:25.187995] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.088 [2024-07-15 15:32:25.188000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.088 [2024-07-15 15:32:25.188127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.088 [2024-07-15 15:32:25.188254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=839631 00:27:16.348 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:16.608 [2024-07-15 15:32:25.979890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.608 15:32:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:16.608 Malloc0 00:27:16.608 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:16.867 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:16.867 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.127 [2024-07-15 15:32:26.614743] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.127 15:32:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:17.387 [2024-07-15 15:32:26.783156] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=839991 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 839991 /var/tmp/bdevperf.sock 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 839991 ']' 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:17.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.387 15:32:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:18.328 15:32:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.328 15:32:27 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:27:18.328 15:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:18.328 15:32:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:18.588 Nvme0n1 00:27:18.588 15:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:19.165 Nvme0n1 00:27:19.165 15:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:19.165 15:32:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:21.156 15:32:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:21.156 15:32:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:21.156 15:32:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:21.415 15:32:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:22.353 15:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:22.353 15:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:22.353 15:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.353 15:32:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:22.613 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.613 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:22.613 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.613 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.873 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.134 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.393 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.394 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.394 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.394 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:23.394 15:32:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:23.654 15:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:23.654 15:32:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.036 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.295 15:32:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:25.554 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.554 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:25.554 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.554 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.814 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.814 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:25.814 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:25.814 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:26.073 15:32:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:27.017 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:27.017 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:27.017 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.017 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.277 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.277 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:27.277 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.277 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.537 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:27.537 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.537 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.537 15:32:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.537 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.537 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.537 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.537 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:27.796 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.796 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:27.796 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.796 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:28.056 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:28.317 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:28.317 15:32:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:29.696 15:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:29.696 15:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:29.696 15:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.696 15:32:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.696 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:29.956 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.956 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:29.956 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:29.956 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.216 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.216 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:30.216 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.217 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:30.217 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:30.217 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:30.217 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.217 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:30.477 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:30.477 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:30.477 15:32:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:30.736 15:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:30.736 15:32:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:31.676 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:31.676 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:31.676 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.676 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:31.943 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.943 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:31.943 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.943 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.205 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:32.465 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.465 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:32.465 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:32.465 15:32:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:32.725 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:32.984 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:33.245 15:32:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.187 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:34.446 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.446 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:34.446 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.447 15:32:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:34.706 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.966 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:34.966 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:34.966 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:34.966 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:35.249 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.249 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:35.249 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:35.249 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:35.544 15:32:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:35.544 15:32:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.925 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.185 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.185 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.185 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.185 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:37.445 15:32:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.445 15:32:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:37.705 15:32:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.705 15:32:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:37.705 15:32:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:37.705 15:32:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:37.965 15:32:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:38.906 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:38.906 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:38.906 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.906 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:39.166 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:39.166 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:39.166 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.166 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.426 15:32:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.426 15:32:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:39.686 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.686 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:39.686 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.686 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:39.946 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:40.206 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:40.466 15:32:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:41.413 15:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:41.413 15:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:41.413 15:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.413 15:32:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:41.413 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.413 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:41.413 15:32:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.413 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:41.674 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.674 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:41.674 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.674 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.935 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:42.196 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.196 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:42.196 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.196 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:42.456 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.456 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:42.456 15:32:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:42.456 15:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:42.717 15:32:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:43.660 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:43.660 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:43.660 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.660 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:43.921 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.921 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:43.921 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.921 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.181 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:44.441 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.441 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:44.441 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.441 15:32:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 839991 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 839991 ']' 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 839991 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839991 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839991' 00:27:44.702 killing process with pid 839991 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 839991 00:27:44.702 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 839991 00:27:44.969 Connection closed with partial response: 00:27:44.970 00:27:44.970 00:27:44.970 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 839991 00:27:44.970 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.970 [2024-07-15 15:32:26.855649] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:27:44.970 [2024-07-15 15:32:26.855714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839991 ] 00:27:44.970 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.970 [2024-07-15 15:32:26.910122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.970 [2024-07-15 15:32:26.962093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.970 Running I/O for 90 seconds... 
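The status checks in the run above all follow one pattern: flip a listener's ANA state with nvmf_subsystem_listener_set_ana_state, sleep briefly, then poll bdev_nvme_get_io_paths over the bdevperf RPC socket and read the current/connected/accessible flags of each path with jq. Below is a minimal sketch of that check, assuming the same rpc.py path, RPC socket, subsystem NQN and listeners as this run; the path_flag helper name is hypothetical (the test script's own helpers are set_ANA_state, port_status and check_status in host/multipath_status.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Read one flag (current|connected|accessible) for the io_path whose trsvcid matches $1.
    # bdevperf runs on a single core here, so a single value comes back.
    path_flag() {
        "$rpc" -s "$sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
    }

    # Example: make the 4421 listener inaccessible on the target (default RPC socket),
    # then confirm the initiator-side path reports it as such.
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1
    [[ $(path_flag 4421 accessible) == false ]] && echo "path 4421 reported inaccessible"

The nvme_qpair NOTICE lines that follow appear to come from the same windows: I/O issued while a path's ANA state is inaccessible completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), which is what the try.txt excerpt below records command by command.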
00:27:44.970 [2024-07-15 15:32:40.100119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.970 [2024-07-15 15:32:40.100798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.970 [2024-07-15 15:32:40.100813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.970 [2024-07-15 15:32:40.100829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.970 [2024-07-15 15:32:40.100844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.970 [2024-07-15 15:32:40.100860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100895] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.100906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.100911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:44.970 [2024-07-15 15:32:40.101606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.970 [2024-07-15 15:32:40.101611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.101941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.101961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.101966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.101978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.101983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.101995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.102000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.102013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.102018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.102031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.102036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.102048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.102053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.102066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.102071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.104152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.104171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.104193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104224] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.971 [2024-07-15 15:32:40.104512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.971 [2024-07-15 15:32:40.104776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:44.971 [2024-07-15 15:32:40.104790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.104983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:44.972 
[2024-07-15 15:32:40.105171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.972 [2024-07-15 15:32:40.105660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:44.972 [2024-07-15 15:32:40.105677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:40.105850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:44.973 [2024-07-15 15:32:40.105855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:52904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:52984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:53032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:53064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:53080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:53096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.188721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.188726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:53224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:53240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:53272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:53288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:27:44.973 [2024-07-15 15:32:52.189643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:53320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:53336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:53352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.973 [2024-07-15 15:32:52.189694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:44.973 [2024-07-15 15:32:52.189704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:53384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:53400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:53416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:53448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.189987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.189992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:53544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:53576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:53592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.974 [2024-07-15 15:32:52.190087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.974 [2024-07-15 15:32:52.190102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.974 [2024-07-15 15:32:52.190117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:53688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:44.974 [2024-07-15 15:32:52.190237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.974 [2024-07-15 15:32:52.190252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.974 [2024-07-15 15:32:52.190267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:44.974 [2024-07-15 15:32:52.190292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.974 [2024-07-15 15:32:52.190296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:44.974 Received shutdown signal, test time was about 25.643274 seconds
00:27:44.974
00:27:44.974                                                                                  Latency(us)
00:27:44.974 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:44.974 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:44.974 Verification LBA range: start 0x0 length 0x4000
00:27:44.974 Nvme0n1                     :      25.64    9379.13      36.64       0.00     0.00   13628.78     336.21 3019898.88
00:27:44.974 ===================================================================================================================
00:27:44.974 Total                       :               9379.13      36.64       0.00     0.00   13628.78     336.21 3019898.88
00:27:44.974 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:45.235 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod
nvme_keyring 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 839631 ']' 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 839631 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 839631 ']' 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 839631 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 839631 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 839631' 00:27:45.235 killing process with pid 839631 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 839631 00:27:45.235 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 839631 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.496 15:32:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.407 15:32:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.407 00:27:47.407 real 0m39.632s 00:27:47.407 user 1m41.596s 00:27:47.407 sys 0m10.765s 00:27:47.407 15:32:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:47.407 15:32:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:47.407 ************************************ 00:27:47.407 END TEST nvmf_host_multipath_status 00:27:47.407 ************************************ 00:27:47.407 15:32:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:47.407 15:32:56 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:47.407 15:32:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 
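The teardown traced here (multipath_status.sh lines 143-148 calling into nvmftestfini and nvmfcleanup, then killprocess and the address flush on cvl_0_1) boils down to a short sequence. The sketch below is reconstructed from this trace only, not copied from nvmf/common.sh or autotest_common.sh, and the PID 839631 and interface name cvl_0_1 are specific to this particular run:

  # Hedged sketch of the cleanup steps visible in the trace above (run as root).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # remove the subsystem under test
  trap - SIGINT SIGTERM EXIT                              # drop the test's error traps
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  sync                                                    # flush I/O before unloading kernel modules
  for i in {1..20}; do modprobe -v -r nvme-tcp && break; done   # retried until nvme_tcp unloads
  modprobe -v -r nvme-fabrics
  kill 839631 && wait 839631                              # stop the nvmf target; wait only succeeds because the target is a child of the test shell
  ip -4 addr flush cvl_0_1                                # clear the test addresses from the interface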
-le 1 ']' 00:27:47.407 15:32:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.407 15:32:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.407 ************************************ 00:27:47.407 START TEST nvmf_discovery_remove_ifc 00:27:47.407 ************************************ 00:27:47.407 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:47.668 * Looking for test storage... 00:27:47.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:47.668 15:32:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:55.813 15:33:04 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:55.813 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:55.813 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:55.813 Found net devices under 0000:31:00.0: cvl_0_0 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:55.813 Found net devices under 0000:31:00.1: cvl_0_1 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:55.813 
15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:55.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.757 ms 00:27:55.813 00:27:55.813 --- 10.0.0.2 ping statistics --- 00:27:55.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.813 rtt min/avg/max/mdev = 0.757/0.757/0.757/0.000 ms 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:27:55.813 00:27:55.813 --- 10.0.0.1 ping statistics --- 00:27:55.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.813 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=850182 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 850182 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 850182 ']' 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:55.813 15:33:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.813 [2024-07-15 15:33:04.872316] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
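For reference, the nvmf_tcp_init sequence traced above reduces to roughly the following shell steps (interface names and addresses copied from the trace; a condensed sketch, not the actual common.sh source):

    # move one port of the e810 pair into a private namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # accept NVMe/TCP traffic on the initiator side and verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), so taking cvl_0_0 down later in the test severs the initiator's TCP path to it.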
00:27:55.813 [2024-07-15 15:33:04.872381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.813 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.813 [2024-07-15 15:33:04.947702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.813 [2024-07-15 15:33:05.019618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.813 [2024-07-15 15:33:05.019657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.813 [2024-07-15 15:33:05.019665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.813 [2024-07-15 15:33:05.019671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.813 [2024-07-15 15:33:05.019676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.813 [2024-07-15 15:33:05.019695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.080 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.341 [2024-07-15 15:33:05.710019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:56.341 [2024-07-15 15:33:05.718152] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:56.341 null0 00:27:56.341 [2024-07-15 15:33:05.750161] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=850232 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 850232 /tmp/host.sock 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 850232 ']' 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:56.341 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.341 15:33:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.341 [2024-07-15 15:33:05.823120] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:27:56.341 [2024-07-15 15:33:05.823165] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850232 ] 00:27:56.341 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.341 [2024-07-15 15:33:05.884653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.341 [2024-07-15 15:33:05.949532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.282 15:33:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.221 [2024-07-15 15:33:07.710920] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:58.221 [2024-07-15 15:33:07.710941] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:58.221 [2024-07-15 15:33:07.710953] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:58.221 [2024-07-15 15:33:07.840368] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:58.501 [2024-07-15 15:33:08.024181] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:58.501 [2024-07-15 15:33:08.024231] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:58.501 [2024-07-15 15:33:08.024255] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:58.501 [2024-07-15 15:33:08.024269] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:58.501 [2024-07-15 15:33:08.024289] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:58.501 [2024-07-15 15:33:08.027867] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1055800 was disconnected and freed. delete nvme_qpair. 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:58.501 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.761 15:33:08 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:58.761 15:33:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.701 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.961 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:59.961 15:33:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:00.901 15:33:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:01.843 15:33:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:03.229 15:33:12 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:03.229 15:33:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:04.171 [2024-07-15 15:33:13.464738] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:04.171 [2024-07-15 15:33:13.464784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.171 [2024-07-15 15:33:13.464796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.171 [2024-07-15 15:33:13.464805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.171 [2024-07-15 15:33:13.464813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.171 [2024-07-15 15:33:13.464821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.171 [2024-07-15 15:33:13.464828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.171 [2024-07-15 15:33:13.464836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.171 [2024-07-15 15:33:13.464843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.171 [2024-07-15 15:33:13.464852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.172 [2024-07-15 15:33:13.464858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.172 [2024-07-15 15:33:13.464866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101c300 is same with the state(5) to be set 00:28:04.172 [2024-07-15 15:33:13.474757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101c300 (9): Bad file descriptor 00:28:04.172 [2024-07-15 15:33:13.484796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:04.172 15:33:13 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.172 15:33:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:05.113 [2024-07-15 15:33:14.496908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:05.113 [2024-07-15 15:33:14.496945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101c300 with addr=10.0.0.2, port=4420 00:28:05.113 [2024-07-15 15:33:14.496956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101c300 is same with the state(5) to be set 00:28:05.113 [2024-07-15 15:33:14.496977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101c300 (9): Bad file descriptor 00:28:05.113 [2024-07-15 15:33:14.497336] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:05.113 [2024-07-15 15:33:14.497353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:05.113 [2024-07-15 15:33:14.497360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:05.113 [2024-07-15 15:33:14.497369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:05.113 [2024-07-15 15:33:14.497383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:05.113 [2024-07-15 15:33:14.497391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:05.113 15:33:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.113 15:33:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:05.113 15:33:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:06.054 [2024-07-15 15:33:15.499764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:06.054 [2024-07-15 15:33:15.499783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:06.054 [2024-07-15 15:33:15.499791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:06.054 [2024-07-15 15:33:15.499798] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:06.054 [2024-07-15 15:33:15.499809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
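The rapid give-up seen in these reset attempts follows from the timeouts passed when discovery was started earlier in the test. Assuming the trace's rpc_cmd helper resolves to SPDK's scripts/rpc.py, the equivalent standalone call, with the socket path and values copied from the trace, is roughly:

    # start discovery against the target's 8009 listener with short loss/reconnect timeouts
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 2-second ctrlr-loss timeout and 1-second reconnect delay, the host abandons the controller after the first failed reconnect instead of retrying indefinitely, and the bdev disappears from the list shortly afterwards.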
00:28:06.054 [2024-07-15 15:33:15.499827] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:06.054 [2024-07-15 15:33:15.499847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.054 [2024-07-15 15:33:15.499857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.054 [2024-07-15 15:33:15.499866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.054 [2024-07-15 15:33:15.499873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.054 [2024-07-15 15:33:15.499881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.054 [2024-07-15 15:33:15.499892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.054 [2024-07-15 15:33:15.499900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.054 [2024-07-15 15:33:15.499912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.054 [2024-07-15 15:33:15.499920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.054 [2024-07-15 15:33:15.499927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.054 [2024-07-15 15:33:15.499934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
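The repeated bdev_get_bdevs / jq / sort / xargs calls above are the test's wait_for_bdev helper polling for that removal; condensed, and assuming the same host socket, the loop is roughly:

    # poll the host's bdev list until it matches the expected value
    # (expected='' while waiting for removal, nvme0n1/nvme1n1 while waiting for attach)
    expected=''
    while true; do
        bdevs=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$bdevs" == "$expected" ]] && break
        sleep 1
    done

The empty result the next check observes is what ends the wait; the test then re-adds 10.0.0.2 to cvl_0_0, brings the link back up, and waits the same way for the rediscovered nvme1n1.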
00:28:06.054 [2024-07-15 15:33:15.500269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x101b780 (9): Bad file descriptor 00:28:06.054 [2024-07-15 15:33:15.501280] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:06.054 [2024-07-15 15:33:15.501291] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.054 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:06.314 15:33:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:07.254 15:33:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:08.192 [2024-07-15 15:33:17.555098] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:08.192 [2024-07-15 15:33:17.555118] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:08.192 [2024-07-15 15:33:17.555131] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:08.192 [2024-07-15 15:33:17.643411] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:08.192 [2024-07-15 15:33:17.746179] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:08.192 [2024-07-15 15:33:17.746214] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:08.192 [2024-07-15 15:33:17.746234] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:08.192 [2024-07-15 15:33:17.746247] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:08.192 [2024-07-15 15:33:17.746254] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:08.192 [2024-07-15 15:33:17.752378] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1022f70 was disconnected and freed. delete nvme_qpair. 
00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.192 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 850232 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 850232 ']' 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 850232 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850232 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850232' 00:28:08.451 killing process with pid 850232 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 850232 00:28:08.451 15:33:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 850232 00:28:08.451 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.452 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.452 rmmod nvme_tcp 00:28:08.452 rmmod nvme_fabrics 00:28:08.452 rmmod nvme_keyring 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:28:08.711 
15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 850182 ']' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 850182 ']' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850182' 00:28:08.711 killing process with pid 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 850182 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.711 15:33:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.253 15:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:11.253 00:28:11.253 real 0m23.335s 00:28:11.253 user 0m27.407s 00:28:11.253 sys 0m6.788s 00:28:11.253 15:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:11.253 15:33:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.253 ************************************ 00:28:11.253 END TEST nvmf_discovery_remove_ifc 00:28:11.253 ************************************ 00:28:11.253 15:33:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:11.253 15:33:20 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:11.253 15:33:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:11.253 15:33:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.253 15:33:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:11.253 ************************************ 00:28:11.253 START TEST nvmf_identify_kernel_target 00:28:11.253 ************************************ 00:28:11.253 15:33:20 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:11.253 * Looking for test storage... 00:28:11.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:11.253 15:33:20 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:28:11.253 15:33:20 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:19.487 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:19.487 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:19.487 Found net devices under 0000:31:00.0: cvl_0_0 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:19.487 Found net devices under 0000:31:00.1: cvl_0_1 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.487 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:28:19.488 00:28:19.488 --- 10.0.0.2 ping statistics --- 00:28:19.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.488 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:28:19.488 00:28:19.488 --- 10.0.0.1 ping statistics --- 00:28:19.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.488 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.488 15:33:28 
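The nvmf_tcp_init sequence above turns the two back-to-back E810 ports into a small two-node topology: cvl_0_0 is moved into a private network namespace and acts as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed, with the interface names and addresses from this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1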
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:19.488 15:33:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:22.788 Waiting for block devices as requested 00:28:22.788 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:22.788 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:22.788 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:22.788 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:22.788 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:23.048 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:23.048 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:23.048 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:23.308 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:23.308 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:23.568 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:23.568 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:23.568 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:23.568 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:23.827 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:23.827 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:23.827 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:23.827 No valid GPT data, bailing 00:28:23.827 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:24.088 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:28:24.088 00:28:24.088 Discovery Log Number of Records 2, Generation counter 2 00:28:24.088 =====Discovery Log Entry 0====== 00:28:24.088 trtype: tcp 00:28:24.088 adrfam: ipv4 00:28:24.088 subtype: current discovery subsystem 00:28:24.088 treq: not specified, sq flow control disable supported 00:28:24.088 portid: 1 00:28:24.088 trsvcid: 4420 00:28:24.088 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:24.088 traddr: 10.0.0.1 00:28:24.088 eflags: none 00:28:24.088 sectype: none 00:28:24.088 =====Discovery Log Entry 1====== 00:28:24.088 trtype: tcp 00:28:24.088 adrfam: ipv4 00:28:24.088 subtype: nvme subsystem 00:28:24.089 treq: not specified, sq flow control disable supported 00:28:24.089 portid: 1 00:28:24.089 trsvcid: 4420 00:28:24.089 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:24.089 traddr: 10.0.0.1 00:28:24.089 eflags: none 00:28:24.089 sectype: none 00:28:24.089 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:24.089 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:24.089 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.089 ===================================================== 00:28:24.089 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:24.089 ===================================================== 00:28:24.089 Controller Capabilities/Features 00:28:24.089 ================================ 00:28:24.089 Vendor ID: 0000 00:28:24.089 Subsystem Vendor ID: 0000 00:28:24.089 Serial Number: 6006366741b233e2ca09 00:28:24.089 Model Number: Linux 00:28:24.089 Firmware Version: 6.7.0-68 00:28:24.089 Recommended Arb Burst: 0 00:28:24.089 IEEE OUI Identifier: 00 00 00 00:28:24.089 Multi-path I/O 00:28:24.089 May have multiple subsystem ports: No 00:28:24.089 May have multiple 
controllers: No 00:28:24.089 Associated with SR-IOV VF: No 00:28:24.089 Max Data Transfer Size: Unlimited 00:28:24.089 Max Number of Namespaces: 0 00:28:24.089 Max Number of I/O Queues: 1024 00:28:24.089 NVMe Specification Version (VS): 1.3 00:28:24.089 NVMe Specification Version (Identify): 1.3 00:28:24.089 Maximum Queue Entries: 1024 00:28:24.089 Contiguous Queues Required: No 00:28:24.089 Arbitration Mechanisms Supported 00:28:24.089 Weighted Round Robin: Not Supported 00:28:24.089 Vendor Specific: Not Supported 00:28:24.089 Reset Timeout: 7500 ms 00:28:24.089 Doorbell Stride: 4 bytes 00:28:24.089 NVM Subsystem Reset: Not Supported 00:28:24.089 Command Sets Supported 00:28:24.089 NVM Command Set: Supported 00:28:24.089 Boot Partition: Not Supported 00:28:24.089 Memory Page Size Minimum: 4096 bytes 00:28:24.089 Memory Page Size Maximum: 4096 bytes 00:28:24.089 Persistent Memory Region: Not Supported 00:28:24.089 Optional Asynchronous Events Supported 00:28:24.089 Namespace Attribute Notices: Not Supported 00:28:24.089 Firmware Activation Notices: Not Supported 00:28:24.089 ANA Change Notices: Not Supported 00:28:24.089 PLE Aggregate Log Change Notices: Not Supported 00:28:24.089 LBA Status Info Alert Notices: Not Supported 00:28:24.089 EGE Aggregate Log Change Notices: Not Supported 00:28:24.089 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.089 Zone Descriptor Change Notices: Not Supported 00:28:24.089 Discovery Log Change Notices: Supported 00:28:24.089 Controller Attributes 00:28:24.089 128-bit Host Identifier: Not Supported 00:28:24.089 Non-Operational Permissive Mode: Not Supported 00:28:24.089 NVM Sets: Not Supported 00:28:24.089 Read Recovery Levels: Not Supported 00:28:24.089 Endurance Groups: Not Supported 00:28:24.089 Predictable Latency Mode: Not Supported 00:28:24.089 Traffic Based Keep ALive: Not Supported 00:28:24.089 Namespace Granularity: Not Supported 00:28:24.089 SQ Associations: Not Supported 00:28:24.089 UUID List: Not Supported 00:28:24.089 Multi-Domain Subsystem: Not Supported 00:28:24.089 Fixed Capacity Management: Not Supported 00:28:24.089 Variable Capacity Management: Not Supported 00:28:24.089 Delete Endurance Group: Not Supported 00:28:24.089 Delete NVM Set: Not Supported 00:28:24.089 Extended LBA Formats Supported: Not Supported 00:28:24.089 Flexible Data Placement Supported: Not Supported 00:28:24.089 00:28:24.089 Controller Memory Buffer Support 00:28:24.089 ================================ 00:28:24.089 Supported: No 00:28:24.089 00:28:24.089 Persistent Memory Region Support 00:28:24.089 ================================ 00:28:24.089 Supported: No 00:28:24.089 00:28:24.089 Admin Command Set Attributes 00:28:24.089 ============================ 00:28:24.089 Security Send/Receive: Not Supported 00:28:24.089 Format NVM: Not Supported 00:28:24.089 Firmware Activate/Download: Not Supported 00:28:24.089 Namespace Management: Not Supported 00:28:24.089 Device Self-Test: Not Supported 00:28:24.089 Directives: Not Supported 00:28:24.089 NVMe-MI: Not Supported 00:28:24.089 Virtualization Management: Not Supported 00:28:24.089 Doorbell Buffer Config: Not Supported 00:28:24.089 Get LBA Status Capability: Not Supported 00:28:24.089 Command & Feature Lockdown Capability: Not Supported 00:28:24.089 Abort Command Limit: 1 00:28:24.089 Async Event Request Limit: 1 00:28:24.089 Number of Firmware Slots: N/A 00:28:24.089 Firmware Slot 1 Read-Only: N/A 00:28:24.089 Firmware Activation Without Reset: N/A 00:28:24.089 Multiple Update Detection Support: N/A 
00:28:24.089 Firmware Update Granularity: No Information Provided 00:28:24.089 Per-Namespace SMART Log: No 00:28:24.089 Asymmetric Namespace Access Log Page: Not Supported 00:28:24.089 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:24.089 Command Effects Log Page: Not Supported 00:28:24.089 Get Log Page Extended Data: Supported 00:28:24.089 Telemetry Log Pages: Not Supported 00:28:24.089 Persistent Event Log Pages: Not Supported 00:28:24.089 Supported Log Pages Log Page: May Support 00:28:24.089 Commands Supported & Effects Log Page: Not Supported 00:28:24.089 Feature Identifiers & Effects Log Page:May Support 00:28:24.089 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.089 Data Area 4 for Telemetry Log: Not Supported 00:28:24.089 Error Log Page Entries Supported: 1 00:28:24.089 Keep Alive: Not Supported 00:28:24.089 00:28:24.089 NVM Command Set Attributes 00:28:24.089 ========================== 00:28:24.089 Submission Queue Entry Size 00:28:24.089 Max: 1 00:28:24.089 Min: 1 00:28:24.089 Completion Queue Entry Size 00:28:24.089 Max: 1 00:28:24.089 Min: 1 00:28:24.089 Number of Namespaces: 0 00:28:24.089 Compare Command: Not Supported 00:28:24.089 Write Uncorrectable Command: Not Supported 00:28:24.089 Dataset Management Command: Not Supported 00:28:24.089 Write Zeroes Command: Not Supported 00:28:24.089 Set Features Save Field: Not Supported 00:28:24.089 Reservations: Not Supported 00:28:24.089 Timestamp: Not Supported 00:28:24.089 Copy: Not Supported 00:28:24.089 Volatile Write Cache: Not Present 00:28:24.089 Atomic Write Unit (Normal): 1 00:28:24.089 Atomic Write Unit (PFail): 1 00:28:24.089 Atomic Compare & Write Unit: 1 00:28:24.089 Fused Compare & Write: Not Supported 00:28:24.089 Scatter-Gather List 00:28:24.089 SGL Command Set: Supported 00:28:24.089 SGL Keyed: Not Supported 00:28:24.089 SGL Bit Bucket Descriptor: Not Supported 00:28:24.089 SGL Metadata Pointer: Not Supported 00:28:24.089 Oversized SGL: Not Supported 00:28:24.089 SGL Metadata Address: Not Supported 00:28:24.089 SGL Offset: Supported 00:28:24.089 Transport SGL Data Block: Not Supported 00:28:24.089 Replay Protected Memory Block: Not Supported 00:28:24.089 00:28:24.089 Firmware Slot Information 00:28:24.089 ========================= 00:28:24.089 Active slot: 0 00:28:24.089 00:28:24.089 00:28:24.089 Error Log 00:28:24.089 ========= 00:28:24.089 00:28:24.089 Active Namespaces 00:28:24.089 ================= 00:28:24.089 Discovery Log Page 00:28:24.089 ================== 00:28:24.089 Generation Counter: 2 00:28:24.089 Number of Records: 2 00:28:24.089 Record Format: 0 00:28:24.089 00:28:24.089 Discovery Log Entry 0 00:28:24.089 ---------------------- 00:28:24.089 Transport Type: 3 (TCP) 00:28:24.089 Address Family: 1 (IPv4) 00:28:24.089 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:24.089 Entry Flags: 00:28:24.089 Duplicate Returned Information: 0 00:28:24.089 Explicit Persistent Connection Support for Discovery: 0 00:28:24.089 Transport Requirements: 00:28:24.089 Secure Channel: Not Specified 00:28:24.089 Port ID: 1 (0x0001) 00:28:24.089 Controller ID: 65535 (0xffff) 00:28:24.089 Admin Max SQ Size: 32 00:28:24.089 Transport Service Identifier: 4420 00:28:24.089 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:24.089 Transport Address: 10.0.0.1 00:28:24.089 Discovery Log Entry 1 00:28:24.089 ---------------------- 00:28:24.089 Transport Type: 3 (TCP) 00:28:24.089 Address Family: 1 (IPv4) 00:28:24.089 Subsystem Type: 2 (NVM Subsystem) 00:28:24.089 Entry Flags: 
00:28:24.089 Duplicate Returned Information: 0 00:28:24.089 Explicit Persistent Connection Support for Discovery: 0 00:28:24.089 Transport Requirements: 00:28:24.089 Secure Channel: Not Specified 00:28:24.089 Port ID: 1 (0x0001) 00:28:24.089 Controller ID: 65535 (0xffff) 00:28:24.089 Admin Max SQ Size: 32 00:28:24.089 Transport Service Identifier: 4420 00:28:24.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:24.089 Transport Address: 10.0.0.1 00:28:24.089 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.089 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.089 get_feature(0x01) failed 00:28:24.089 get_feature(0x02) failed 00:28:24.089 get_feature(0x04) failed 00:28:24.089 ===================================================== 00:28:24.089 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:24.089 ===================================================== 00:28:24.090 Controller Capabilities/Features 00:28:24.090 ================================ 00:28:24.090 Vendor ID: 0000 00:28:24.090 Subsystem Vendor ID: 0000 00:28:24.090 Serial Number: 4f8aa5ad3fa23cedd444 00:28:24.090 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:24.090 Firmware Version: 6.7.0-68 00:28:24.090 Recommended Arb Burst: 6 00:28:24.090 IEEE OUI Identifier: 00 00 00 00:28:24.090 Multi-path I/O 00:28:24.090 May have multiple subsystem ports: Yes 00:28:24.090 May have multiple controllers: Yes 00:28:24.090 Associated with SR-IOV VF: No 00:28:24.090 Max Data Transfer Size: Unlimited 00:28:24.090 Max Number of Namespaces: 1024 00:28:24.090 Max Number of I/O Queues: 128 00:28:24.090 NVMe Specification Version (VS): 1.3 00:28:24.090 NVMe Specification Version (Identify): 1.3 00:28:24.090 Maximum Queue Entries: 1024 00:28:24.090 Contiguous Queues Required: No 00:28:24.090 Arbitration Mechanisms Supported 00:28:24.090 Weighted Round Robin: Not Supported 00:28:24.090 Vendor Specific: Not Supported 00:28:24.090 Reset Timeout: 7500 ms 00:28:24.090 Doorbell Stride: 4 bytes 00:28:24.090 NVM Subsystem Reset: Not Supported 00:28:24.090 Command Sets Supported 00:28:24.090 NVM Command Set: Supported 00:28:24.090 Boot Partition: Not Supported 00:28:24.090 Memory Page Size Minimum: 4096 bytes 00:28:24.090 Memory Page Size Maximum: 4096 bytes 00:28:24.090 Persistent Memory Region: Not Supported 00:28:24.090 Optional Asynchronous Events Supported 00:28:24.090 Namespace Attribute Notices: Supported 00:28:24.090 Firmware Activation Notices: Not Supported 00:28:24.090 ANA Change Notices: Supported 00:28:24.090 PLE Aggregate Log Change Notices: Not Supported 00:28:24.090 LBA Status Info Alert Notices: Not Supported 00:28:24.090 EGE Aggregate Log Change Notices: Not Supported 00:28:24.090 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.090 Zone Descriptor Change Notices: Not Supported 00:28:24.090 Discovery Log Change Notices: Not Supported 00:28:24.090 Controller Attributes 00:28:24.090 128-bit Host Identifier: Supported 00:28:24.090 Non-Operational Permissive Mode: Not Supported 00:28:24.090 NVM Sets: Not Supported 00:28:24.090 Read Recovery Levels: Not Supported 00:28:24.090 Endurance Groups: Not Supported 00:28:24.090 Predictable Latency Mode: Not Supported 00:28:24.090 Traffic Based Keep ALive: Supported 00:28:24.090 Namespace Granularity: Not Supported 
00:28:24.090 SQ Associations: Not Supported 00:28:24.090 UUID List: Not Supported 00:28:24.090 Multi-Domain Subsystem: Not Supported 00:28:24.090 Fixed Capacity Management: Not Supported 00:28:24.090 Variable Capacity Management: Not Supported 00:28:24.090 Delete Endurance Group: Not Supported 00:28:24.090 Delete NVM Set: Not Supported 00:28:24.090 Extended LBA Formats Supported: Not Supported 00:28:24.090 Flexible Data Placement Supported: Not Supported 00:28:24.090 00:28:24.090 Controller Memory Buffer Support 00:28:24.090 ================================ 00:28:24.090 Supported: No 00:28:24.090 00:28:24.090 Persistent Memory Region Support 00:28:24.090 ================================ 00:28:24.090 Supported: No 00:28:24.090 00:28:24.090 Admin Command Set Attributes 00:28:24.090 ============================ 00:28:24.090 Security Send/Receive: Not Supported 00:28:24.090 Format NVM: Not Supported 00:28:24.090 Firmware Activate/Download: Not Supported 00:28:24.090 Namespace Management: Not Supported 00:28:24.090 Device Self-Test: Not Supported 00:28:24.090 Directives: Not Supported 00:28:24.090 NVMe-MI: Not Supported 00:28:24.090 Virtualization Management: Not Supported 00:28:24.090 Doorbell Buffer Config: Not Supported 00:28:24.090 Get LBA Status Capability: Not Supported 00:28:24.090 Command & Feature Lockdown Capability: Not Supported 00:28:24.090 Abort Command Limit: 4 00:28:24.090 Async Event Request Limit: 4 00:28:24.090 Number of Firmware Slots: N/A 00:28:24.090 Firmware Slot 1 Read-Only: N/A 00:28:24.090 Firmware Activation Without Reset: N/A 00:28:24.090 Multiple Update Detection Support: N/A 00:28:24.090 Firmware Update Granularity: No Information Provided 00:28:24.090 Per-Namespace SMART Log: Yes 00:28:24.090 Asymmetric Namespace Access Log Page: Supported 00:28:24.090 ANA Transition Time : 10 sec 00:28:24.090 00:28:24.090 Asymmetric Namespace Access Capabilities 00:28:24.090 ANA Optimized State : Supported 00:28:24.090 ANA Non-Optimized State : Supported 00:28:24.090 ANA Inaccessible State : Supported 00:28:24.090 ANA Persistent Loss State : Supported 00:28:24.090 ANA Change State : Supported 00:28:24.090 ANAGRPID is not changed : No 00:28:24.090 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:24.090 00:28:24.090 ANA Group Identifier Maximum : 128 00:28:24.090 Number of ANA Group Identifiers : 128 00:28:24.090 Max Number of Allowed Namespaces : 1024 00:28:24.090 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:24.090 Command Effects Log Page: Supported 00:28:24.090 Get Log Page Extended Data: Supported 00:28:24.090 Telemetry Log Pages: Not Supported 00:28:24.090 Persistent Event Log Pages: Not Supported 00:28:24.090 Supported Log Pages Log Page: May Support 00:28:24.090 Commands Supported & Effects Log Page: Not Supported 00:28:24.090 Feature Identifiers & Effects Log Page:May Support 00:28:24.090 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.090 Data Area 4 for Telemetry Log: Not Supported 00:28:24.090 Error Log Page Entries Supported: 128 00:28:24.090 Keep Alive: Supported 00:28:24.090 Keep Alive Granularity: 1000 ms 00:28:24.090 00:28:24.090 NVM Command Set Attributes 00:28:24.090 ========================== 00:28:24.090 Submission Queue Entry Size 00:28:24.090 Max: 64 00:28:24.090 Min: 64 00:28:24.090 Completion Queue Entry Size 00:28:24.090 Max: 16 00:28:24.090 Min: 16 00:28:24.090 Number of Namespaces: 1024 00:28:24.090 Compare Command: Not Supported 00:28:24.090 Write Uncorrectable Command: Not Supported 00:28:24.090 Dataset Management Command: Supported 
00:28:24.090 Write Zeroes Command: Supported 00:28:24.090 Set Features Save Field: Not Supported 00:28:24.090 Reservations: Not Supported 00:28:24.090 Timestamp: Not Supported 00:28:24.090 Copy: Not Supported 00:28:24.090 Volatile Write Cache: Present 00:28:24.090 Atomic Write Unit (Normal): 1 00:28:24.090 Atomic Write Unit (PFail): 1 00:28:24.090 Atomic Compare & Write Unit: 1 00:28:24.090 Fused Compare & Write: Not Supported 00:28:24.090 Scatter-Gather List 00:28:24.090 SGL Command Set: Supported 00:28:24.090 SGL Keyed: Not Supported 00:28:24.090 SGL Bit Bucket Descriptor: Not Supported 00:28:24.090 SGL Metadata Pointer: Not Supported 00:28:24.090 Oversized SGL: Not Supported 00:28:24.090 SGL Metadata Address: Not Supported 00:28:24.090 SGL Offset: Supported 00:28:24.090 Transport SGL Data Block: Not Supported 00:28:24.090 Replay Protected Memory Block: Not Supported 00:28:24.090 00:28:24.090 Firmware Slot Information 00:28:24.090 ========================= 00:28:24.090 Active slot: 0 00:28:24.090 00:28:24.090 Asymmetric Namespace Access 00:28:24.090 =========================== 00:28:24.090 Change Count : 0 00:28:24.090 Number of ANA Group Descriptors : 1 00:28:24.090 ANA Group Descriptor : 0 00:28:24.090 ANA Group ID : 1 00:28:24.090 Number of NSID Values : 1 00:28:24.090 Change Count : 0 00:28:24.090 ANA State : 1 00:28:24.090 Namespace Identifier : 1 00:28:24.090 00:28:24.090 Commands Supported and Effects 00:28:24.090 ============================== 00:28:24.090 Admin Commands 00:28:24.090 -------------- 00:28:24.090 Get Log Page (02h): Supported 00:28:24.090 Identify (06h): Supported 00:28:24.090 Abort (08h): Supported 00:28:24.090 Set Features (09h): Supported 00:28:24.090 Get Features (0Ah): Supported 00:28:24.090 Asynchronous Event Request (0Ch): Supported 00:28:24.090 Keep Alive (18h): Supported 00:28:24.090 I/O Commands 00:28:24.090 ------------ 00:28:24.090 Flush (00h): Supported 00:28:24.090 Write (01h): Supported LBA-Change 00:28:24.090 Read (02h): Supported 00:28:24.090 Write Zeroes (08h): Supported LBA-Change 00:28:24.090 Dataset Management (09h): Supported 00:28:24.090 00:28:24.090 Error Log 00:28:24.090 ========= 00:28:24.090 Entry: 0 00:28:24.090 Error Count: 0x3 00:28:24.090 Submission Queue Id: 0x0 00:28:24.090 Command Id: 0x5 00:28:24.090 Phase Bit: 0 00:28:24.090 Status Code: 0x2 00:28:24.090 Status Code Type: 0x0 00:28:24.090 Do Not Retry: 1 00:28:24.090 Error Location: 0x28 00:28:24.090 LBA: 0x0 00:28:24.090 Namespace: 0x0 00:28:24.090 Vendor Log Page: 0x0 00:28:24.090 ----------- 00:28:24.090 Entry: 1 00:28:24.090 Error Count: 0x2 00:28:24.090 Submission Queue Id: 0x0 00:28:24.090 Command Id: 0x5 00:28:24.090 Phase Bit: 0 00:28:24.090 Status Code: 0x2 00:28:24.090 Status Code Type: 0x0 00:28:24.090 Do Not Retry: 1 00:28:24.090 Error Location: 0x28 00:28:24.090 LBA: 0x0 00:28:24.090 Namespace: 0x0 00:28:24.090 Vendor Log Page: 0x0 00:28:24.090 ----------- 00:28:24.090 Entry: 2 00:28:24.090 Error Count: 0x1 00:28:24.090 Submission Queue Id: 0x0 00:28:24.090 Command Id: 0x4 00:28:24.090 Phase Bit: 0 00:28:24.090 Status Code: 0x2 00:28:24.090 Status Code Type: 0x0 00:28:24.090 Do Not Retry: 1 00:28:24.090 Error Location: 0x28 00:28:24.091 LBA: 0x0 00:28:24.091 Namespace: 0x0 00:28:24.091 Vendor Log Page: 0x0 00:28:24.091 00:28:24.091 Number of Queues 00:28:24.091 ================ 00:28:24.091 Number of I/O Submission Queues: 128 00:28:24.091 Number of I/O Completion Queues: 128 00:28:24.091 00:28:24.091 ZNS Specific Controller Data 00:28:24.091 
============================ 00:28:24.091 Zone Append Size Limit: 0 00:28:24.091 00:28:24.091 00:28:24.091 Active Namespaces 00:28:24.091 ================= 00:28:24.091 get_feature(0x05) failed 00:28:24.091 Namespace ID:1 00:28:24.091 Command Set Identifier: NVM (00h) 00:28:24.091 Deallocate: Supported 00:28:24.091 Deallocated/Unwritten Error: Not Supported 00:28:24.091 Deallocated Read Value: Unknown 00:28:24.091 Deallocate in Write Zeroes: Not Supported 00:28:24.091 Deallocated Guard Field: 0xFFFF 00:28:24.091 Flush: Supported 00:28:24.091 Reservation: Not Supported 00:28:24.091 Namespace Sharing Capabilities: Multiple Controllers 00:28:24.091 Size (in LBAs): 3750748848 (1788GiB) 00:28:24.091 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:24.091 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:24.091 UUID: f890adf6-5b9d-46fe-81ab-9e5a56b595d9 00:28:24.091 Thin Provisioning: Not Supported 00:28:24.091 Per-NS Atomic Units: Yes 00:28:24.091 Atomic Write Unit (Normal): 8 00:28:24.091 Atomic Write Unit (PFail): 8 00:28:24.091 Preferred Write Granularity: 8 00:28:24.091 Atomic Compare & Write Unit: 8 00:28:24.091 Atomic Boundary Size (Normal): 0 00:28:24.091 Atomic Boundary Size (PFail): 0 00:28:24.091 Atomic Boundary Offset: 0 00:28:24.091 NGUID/EUI64 Never Reused: No 00:28:24.091 ANA group ID: 1 00:28:24.091 Namespace Write Protected: No 00:28:24.091 Number of LBA Formats: 1 00:28:24.091 Current LBA Format: LBA Format #00 00:28:24.091 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:24.091 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.091 rmmod nvme_tcp 00:28:24.091 rmmod nvme_fabrics 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.091 15:33:33 nvmf_tcp.nvmf_identify_kernel_target 
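Before the nvmftestfini/clean_kernel_target teardown that follows, the kernel-target side of this test was assembled entirely through nvmet configfs. The trace shows the mkdir/echo/ln -s calls but not the redirect targets, so the standard nvmet attribute names are assumed in this sketch; the block device, NQN, address and port are the ones from this run:

  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1

  mkdir $subsys
  mkdir $subsys/namespaces/1
  mkdir $port
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model        # assumed attribute name; matches the Model Number reported later
  echo 1 > $subsys/attr_allow_any_host                              # assumed attribute name
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path              # non-zoned NVMe device with no GPT in use
  echo 1 > $subsys/namespaces/1/enable
  echo 10.0.0.1 > $port/addr_traddr
  echo tcp     > $port/addr_trtype
  echo 4420    > $port/addr_trsvcid
  echo ipv4    > $port/addr_adrfam
  ln -s $subsys $port/subsystems/

  # exercised from the initiator side with:
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 \
      --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

The cleanup traced next reverses this: disable the namespace, remove the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet.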
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:26.635 15:33:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:29.932 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:29.932 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:29.932 00:28:29.932 real 0m18.853s 00:28:29.932 user 0m4.800s 00:28:29.932 sys 0m11.003s 00:28:29.932 15:33:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:29.932 15:33:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:29.932 ************************************ 00:28:29.932 END TEST nvmf_identify_kernel_target 00:28:29.932 ************************************ 00:28:29.932 15:33:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:29.932 15:33:39 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:29.932 15:33:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:29.932 15:33:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.932 15:33:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:29.932 ************************************ 
00:28:29.932 START TEST nvmf_auth_host 00:28:29.932 ************************************ 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:29.932 * Looking for test storage... 00:28:29.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:29.932 15:33:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.067 
15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:38.067 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:38.067 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:38.067 Found net devices under 0000:31:00.0: 
cvl_0_0 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:38.067 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:38.068 Found net devices under 0000:31:00.1: cvl_0_1 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.068 15:33:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:38.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms 00:28:38.068 00:28:38.068 --- 10.0.0.2 ping statistics --- 00:28:38.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.068 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:28:38.068 00:28:38.068 --- 10.0.0.1 ping statistics --- 00:28:38.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.068 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=865639 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 865639 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 865639 ']' 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
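The nvmf_tcp_init steps traced above split the two ice-bound E810 ports between a network namespace and the default namespace. Condensed to the bare commands, with interface names and addresses exactly as they appear in the trace (everything else stays the business of nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # cvl_0_0 (NVMF_TARGET_INTERFACE) moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # cvl_0_1 (NVMF_INITIATOR_INTERFACE) stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through on the default-namespace side
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings come back with 0% loss, so the 10.0.0.1 <-> 10.0.0.2 path is known to work before any authentication is attempted.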
00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:38.068 15:33:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:38.639 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e93e86509f1133c69709cf32611c665b 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Nz9 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e93e86509f1133c69709cf32611c665b 0 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e93e86509f1133c69709cf32611c665b 0 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e93e86509f1133c69709cf32611c665b 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Nz9 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Nz9 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Nz9 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:38.640 
15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b3ebec2d87544d6bfebecd1c991237611e0ec11fc55ab18f654610f6a45415b6 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hTo 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b3ebec2d87544d6bfebecd1c991237611e0ec11fc55ab18f654610f6a45415b6 3 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3ebec2d87544d6bfebecd1c991237611e0ec11fc55ab18f654610f6a45415b6 3 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3ebec2d87544d6bfebecd1c991237611e0ec11fc55ab18f654610f6a45415b6 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:38.640 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hTo 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hTo 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hTo 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ff2912335f5649258807bbd2f167b2fe6489727416334401 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.r6R 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ff2912335f5649258807bbd2f167b2fe6489727416334401 0 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ff2912335f5649258807bbd2f167b2fe6489727416334401 0 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ff2912335f5649258807bbd2f167b2fe6489727416334401 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.r6R 00:28:38.901 15:33:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.r6R 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.r6R 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:38.901 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d97bb263f90a0496782b1a6f3f4b2a01d73d1716c90e40e 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Kg8 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d97bb263f90a0496782b1a6f3f4b2a01d73d1716c90e40e 2 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d97bb263f90a0496782b1a6f3f4b2a01d73d1716c90e40e 2 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d97bb263f90a0496782b1a6f3f4b2a01d73d1716c90e40e 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Kg8 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Kg8 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Kg8 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34343bdae8cc98d4fbd27822d9a55b8d 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.87p 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34343bdae8cc98d4fbd27822d9a55b8d 1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34343bdae8cc98d4fbd27822d9a55b8d 1 
00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34343bdae8cc98d4fbd27822d9a55b8d 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.87p 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.87p 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.87p 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f29c1751af4c300116045d4c08e5fdb4 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CSQ 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f29c1751af4c300116045d4c08e5fdb4 1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f29c1751af4c300116045d4c08e5fdb4 1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f29c1751af4c300116045d4c08e5fdb4 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:38.902 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CSQ 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CSQ 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.CSQ 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=cc4e5c72f3aad7c3930d7068d73c50f8ac8496dae2ce9819 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.4Cx 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc4e5c72f3aad7c3930d7068d73c50f8ac8496dae2ce9819 2 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc4e5c72f3aad7c3930d7068d73c50f8ac8496dae2ce9819 2 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc4e5c72f3aad7c3930d7068d73c50f8ac8496dae2ce9819 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.4Cx 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.4Cx 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.4Cx 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7e7057ae174cef30d9955ac577886d11 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1RA 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7e7057ae174cef30d9955ac577886d11 0 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7e7057ae174cef30d9955ac577886d11 0 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7e7057ae174cef30d9955ac577886d11 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1RA 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1RA 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.1RA 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:39.163 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba0002c7a40da21949a0d91616cca9cb6ce55831b4d12bc510b622add0c74120 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wIq 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba0002c7a40da21949a0d91616cca9cb6ce55831b4d12bc510b622add0c74120 3 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba0002c7a40da21949a0d91616cca9cb6ce55831b4d12bc510b622add0c74120 3 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba0002c7a40da21949a0d91616cca9cb6ce55831b4d12bc510b622add0c74120 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wIq 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wIq 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wIq 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 865639 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 865639 ']' 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
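All of the secrets registered below come out of gen_dhchap_key in nvmf/common.sh. Stripped of the xtrace noise, the pattern repeated above is roughly the following sketch (the DHHC-1 encoding itself is done by an inline "python -" helper that the trace does not expand, so that step is only summarized here):

  # gen_dhchap_key <digest> <len>
  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars; -l 24 / -l 32 for the 48- and 64-char keys
  file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.Nz9
  # format_dhchap_key turns the hex string into "DHHC-1:<digest id>:<base64 payload>:",
  # with the digest id following the null/sha256/sha384/sha512 -> 0/1/2/3 map set up at nvmf/common.sh@724
  chmod 0600 "$file"
  echo "$file"                              # this path is what ends up in keys[i] / ckeys[i]

The base64 payload is the generated hex string with a few trailing checksum bytes, which can be checked against the DHHC-1 strings that show up later in the trace:

  echo ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4 | base64 -d | head -c 32; echo
  # prints e93e86509f1133c69709cf32611c665b, the hex generated for keys[0] above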
00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:39.164 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nz9 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hTo ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTo 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.r6R 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Kg8 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kg8 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.87p 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.CSQ ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.CSQ 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.4Cx 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.1RA ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.1RA 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.424 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wIq 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
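Each key file is then registered with the SPDK application started by nvmfappstart; rpc_cmd in the trace is the harness front end that effectively forwards its arguments to scripts/rpc.py, so the keyring_file_add_key calls above amount to:

  # host secrets go in as keyN, controller (bidirectional) secrets as ckeyN
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Nz9
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hTo
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.r6R
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Kg8
  # key2/ckey2, key3/ckey3 and key4 follow the same pattern; ckeys[4] is deliberately left empty

With the keyring populated, nvmet_auth_init builds the kernel NVMe-oF target (subsystem nqn.2024-02.io.spdk:cnode0 exported on 10.0.0.1:4420, backed by the local /dev/nvme0n1) that the SPDK host will authenticate against, as the trace below shows.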
00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:39.425 15:33:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:39.425 15:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:39.425 15:33:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.658 Waiting for block devices as requested 00:28:43.658 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:43.658 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:43.917 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.917 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.917 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:43.917 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:44.178 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:44.178 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:44.178 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:44.439 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:45.010 No valid GPT data, bailing 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:28:45.010 00:28:45.010 Discovery Log Number of Records 2, Generation counter 2 00:28:45.010 =====Discovery Log Entry 0====== 00:28:45.010 trtype: tcp 00:28:45.010 adrfam: ipv4 00:28:45.010 subtype: current discovery subsystem 00:28:45.010 treq: not specified, sq flow control disable supported 00:28:45.010 portid: 1 00:28:45.010 trsvcid: 4420 00:28:45.010 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:45.010 traddr: 10.0.0.1 00:28:45.010 eflags: none 00:28:45.010 sectype: none 00:28:45.010 =====Discovery Log Entry 1====== 00:28:45.010 trtype: tcp 00:28:45.010 adrfam: ipv4 00:28:45.010 subtype: nvme subsystem 00:28:45.010 treq: not specified, sq flow control disable supported 00:28:45.010 portid: 1 00:28:45.010 trsvcid: 4420 00:28:45.010 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:45.010 traddr: 10.0.0.1 00:28:45.010 eflags: none 00:28:45.010 sectype: none 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 
]] 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:45.010 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.011 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 nvme0n1 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.272 15:33:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.272 
15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.272 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.533 nvme0n1 00:28:45.533 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.533 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.533 15:33:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.534 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.534 15:33:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.534 15:33:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.534 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 nvme0n1 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
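The trace above repeats, for every digest/dhgroup/keyid combination, the same host-side sequence: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach a controller over TCP with the per-key credentials, confirm nvme0 appears, then detach it. A minimal sketch of that sequence follows, reusing the RPC names and flags visible in the trace; the scripts/rpc.py wrapper path and the helper name are assumptions and not part of the captured log.

    # Hypothetical helper mirroring the traced connect_authenticate step.
    connect_authenticate_sketch() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Limit the initiator to one digest/dhgroup pair for this iteration.
        ./scripts/rpc.py bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Connect with the host key (and the controller key, when one is defined).
        ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # Authentication succeeded if the controller is visible, then clean up.
        [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        ./scripts/rpc.py bdev_nvme_detach_controller nvme0
    }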
00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.820 nvme0n1 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.820 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:46.083 15:33:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 nvme0n1 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.083 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 nvme0n1 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.345 15:33:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.606 nvme0n1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.606 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.867 nvme0n1 00:28:46.867 
15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.867 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.868 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.129 nvme0n1 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
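On the target side, the nvmet_auth_set_key calls traced here install the matching credentials for the host entry before each connection attempt: the hash name (e.g. 'hmac(sha256)'), the DH group, the DHHC-1 host key and, when present, the controller key. A rough sketch of that step, assuming the standard kernel nvmet configfs layout; the /sys/kernel/config paths are an assumption, only the echoed values appear in this trace, and the key strings are abbreviated.

    # Hypothetical target-side helper; HOST_NQN and the configfs paths are assumptions.
    HOST_NQN=nqn.2024-02.io.spdk:host0
    HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOST_NQN
    echo 'hmac(sha256)'     > "$HOST_DIR/dhchap_hash"      # digest under test in this pass
    echo ffdhe3072          > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:02:...'    > "$HOST_DIR/dhchap_key"       # host key (abbreviated)
    echo 'DHHC-1:00:...'    > "$HOST_DIR/dhchap_ctrl_key"  # controller key, if bidirectional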
00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.129 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.391 nvme0n1 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.391 
15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.391 15:33:56 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.391 15:33:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.652 nvme0n1 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:47.652 15:33:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.652 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.913 nvme0n1 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.913 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.174 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.174 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.174 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.174 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.175 15:33:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.175 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.436 nvme0n1 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.437 15:33:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.437 15:33:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.697 nvme0n1 00:28:48.697 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.697 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.697 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.697 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.697 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
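The block above is one pass of the per-key loop in host/auth.sh: install a DH-HMAC-CHAP key on the target, point the host at the same digest/dhgroup, connect, verify, disconnect. Condensed into a sketch (the helper names, RPC flags and addresses are taken from the trace itself; the loop variables and exact script layout are paraphrased, not verbatim):

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # target side: install key $keyid for hmac(sha256)/$dhgroup under the nvmet subsystem
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            # host side: restrict the initiator to the same digest and DH group ...
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # ... then connect with the matching key, adding a controller key only when one is defined
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
            # verify the controller actually authenticated and came up, then tear it down for the next key
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

In this run keys 0-3 are also given a controller key (ckey0-ckey3), while key 4's ckey is empty, so that pass omits --dhchap-ctrlr-key, as the [[ -z '' ]] checks in the trace show.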
00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.698 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.960 nvme0n1 00:28:48.960 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.960 15:33:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.960 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.960 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.960 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.960 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.221 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.483 nvme0n1 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:49.483 15:33:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.483 15:33:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.051 nvme0n1 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.051 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.052 
15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.052 15:33:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.052 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.312 nvme0n1 00:28:50.312 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.573 15:33:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.573 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.143 nvme0n1 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.143 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.144 
15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.144 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.404 nvme0n1 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.404 15:34:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.663 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.664 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.664 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.664 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.664 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.934 nvme0n1 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.934 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.194 15:34:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.764 nvme0n1 00:28:52.764 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.764 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.764 15:34:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.764 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.764 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.765 15:34:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.703 nvme0n1 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.703 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.704 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.641 nvme0n1 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.642 
15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
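The get_main_ns_ip lines that precede every attach above (nvmf/common.sh@741-755) boil down to picking which environment variable holds the address to dial for the transport under test. A rough reconstruction from the trace follows; the name TEST_TRANSPORT for the variable carrying "tcp" is an assumption, everything else mirrors the traced statements:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        # bail out if the transport or its candidate variable name is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # dereference the chosen variable name; in this run that resolves to 10.0.0.1
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }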
00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:54.642 15:34:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.211 nvme0n1 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.211 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:55.212 
15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:55.212 15:34:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 nvme0n1 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 nvme0n1 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.166 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 nvme0n1 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 15:34:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.426 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.686 nvme0n1 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.686 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.687 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.947 nvme0n1 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:56.947 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.948 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.207 nvme0n1 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.207 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.468 nvme0n1 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.468 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.469 15:34:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.744 nvme0n1 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.744 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.028 nvme0n1 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.028 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 nvme0n1 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.029 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.289 nvme0n1 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.289 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.289 15:34:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.550 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.551 15:34:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.812 nvme0n1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:58.812 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 nvme0n1 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.073 15:34:08 
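The host/auth.sh@101-@104 markers in the trace outline the test's driving loop: for every DH group, each of the five keys is first pushed to the kernel nvmet target (nvmet_auth_set_key) and then exercised from the SPDK host side (connect_authenticate). The bash skeleton below is only a paraphrase of that structure reconstructed from the xtrace markers, not the literal test script; the dhgroups and keys arrays and the two helper functions are assumed to be defined earlier in auth.sh, and this part of the run uses the sha384 digest throughout.

# Loop structure implied by the host/auth.sh@101-104 trace markers (a paraphrase,
# not the literal SPDK test script); dhgroups, keys and the two helpers are
# assumed to be defined earlier in auth.sh.
for dhgroup in "${dhgroups[@]}"; do                        # e.g. ffdhe4096, ffdhe6144, ffdhe8192
    for keyid in "${!keys[@]}"; do                         # keyids 0..4
        nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"    # program the kernel nvmet target
        connect_authenticate "sha384" "$dhgroup" "$keyid"  # attach, verify, detach on the host
    done
done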
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.073 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.334 nvme0n1 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.334 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:59.595 15:34:08 
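Inside connect_authenticate, each combination boils down to four RPCs that appear verbatim in the trace: restrict the initiator to a single digest and DH group, attach the controller with the key pair for the current keyid, confirm that nvme0 shows up, and detach again before the next iteration. A minimal sketch of one cycle (sha384, ffdhe4096, keyid 0) follows; rpc_cmd is assumed to be the test wrapper around SPDK's scripts/rpc.py, and the named keys key0/ckey0 are assumed to have been registered earlier in the run.

# One connect_authenticate cycle as shown in the trace (sha384 / ffdhe4096 / keyid 0).
# rpc_cmd is assumed to wrap scripts/rpc.py; key0/ckey0 are assumed pre-registered.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # controller came up
rpc_cmd bdev_nvme_detach_controller nvme0                                 # clean up for the next keyid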
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.595 15:34:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.595 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 nvme0n1 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:59.856 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.118 nvme0n1 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.118 15:34:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.690 nvme0n1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.690 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.261 nvme0n1 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.261 15:34:10 
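The secrets printed above follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hh>:<base64 of secret plus CRC>:, where the two-digit field indicates how the secret was transformed (00 unhashed, 01 SHA-256, 02 SHA-384, 03 SHA-512). That interpretation comes from the NVMe secret format rather than from this log, so treat the short sketch below as an assumption; it merely splits one of the keys shown above to illustrate the layout.

# Assumed layout of a DH-HMAC-CHAP secret (per the NVMe secret representation,
# not confirmed by this log): DHHC-1:<hash id>:<base64(secret || CRC-32)>:
key='DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4:'
echo "$key" | cut -d: -f2                       # hash id of the transformed secret (00 = unhashed)
echo "$key" | cut -d: -f3 | base64 -d | wc -c   # 36 bytes here: 32-byte secret + 4-byte CRC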
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.261 15:34:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.832 nvme0n1 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.832 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.833 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 nvme0n1 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.094 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.355 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:29:02.355 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.355 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.355 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.356 15:34:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.617 nvme0n1 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.617 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
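keyid 4 is the one case without a controller key: ckey is empty in the trace, so the ${ckeys[keyid]:+...} expansion from host/auth.sh@58 yields nothing and the attach is issued with --dhchap-key only, meaning only host authentication is performed and the controller is not asked to authenticate back. A short paraphrase of that conditional, using the same expansion shown in the trace:

# Optional controller key, as in host/auth.sh@58: when ckeys[4] is empty the array
# expands to zero words and --dhchap-ctrlr-key is simply omitted from the attach.
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"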
00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.879 15:34:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.451 nvme0n1 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.451 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.712 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.713 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.282 nvme0n1 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.282 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:04.542 15:34:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.543 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:04.543 15:34:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.112 nvme0n1 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.112 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.373 15:34:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 nvme0n1 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:05.956 15:34:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.956 15:34:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 nvme0n1 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 nvme0n1 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:06.898 15:34:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.898 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.899 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:06.899 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 nvme0n1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.160 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.421 nvme0n1 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.421 15:34:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.421 15:34:16 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.421 15:34:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.681 nvme0n1 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.681 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.682 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 nvme0n1 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.943 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.204 nvme0n1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.204 
15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.204 15:34:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.204 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.465 nvme0n1 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
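The echo 'hmac(sha512)', echo ffdhe3072 and echo DHHC-1:... steps traced just above come from the test's nvmet_auth_set_key helper, which programs the kernel nvmet target with the secret for the keyid under test before the initiator attempts to connect. As a minimal sketch only, assuming a Linux nvmet target whose per-host configfs entries expose dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key (the configfs path and hostnqn below are illustrative assumptions, not values read from this run), such a helper could look roughly like this:

nvmet_auth_set_key() {   # sketch only, not the script's actual implementation
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}                        # DHHC-1 secrets indexed by keyid, assumed set up earlier
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed configfs path
    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${host}/dhchap_key"       # host secret
    [[ -n ${ckey} ]] && echo "${ckey}" > "${host}/dhchap_ctrl_key"   # only written when bidirectional auth is tested
}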
00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.465 15:34:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.725 nvme0n1 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.725 15:34:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.725 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
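Each iteration in this trace then exercises the initiator side through four SPDK RPCs: bdev_nvme_set_options to pin the digest and DH group, bdev_nvme_attach_controller with the matching --dhchap-key (and --dhchap-ctrlr-key when a controller key exists), bdev_nvme_get_controllers piped through jq to confirm that nvme0 actually authenticated and attached, and bdev_nvme_detach_controller to reset for the next keyid. Condensed into standalone commands for the sha512/ffdhe3072/key3 case shown here (the scripts/rpc.py path is an assumption; the RPC names, flags, address and NQNs appear verbatim in the trace):

RPC=scripts/rpc.py   # assumed path to the SPDK JSON-RPC client
# 1) restrict the initiator to the digest/dhgroup pair under test
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
# 2) attach using the key names set up earlier in the test for this keyid
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
# 3) the attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up
$RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
$RPC bdev_nvme_detach_controller nvme0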
00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.726 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.987 nvme0n1 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.987 
15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.987 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.248 nvme0n1 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.248 15:34:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.249 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.249 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.249 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.510 nvme0n1 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.510 15:34:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.510 15:34:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.510 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.771 nvme0n1 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.771 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.032 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.293 nvme0n1 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.293 15:34:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.555 nvme0n1 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.555 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.816 nvme0n1 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:10.816 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:10.817 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.077 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.077 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:11.077 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:11.077 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.078 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.338 nvme0n1 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.338 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
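The connect_authenticate call traced above repeats the same host-side RPC sequence for every digest/dhgroup/keyid combination. A minimal sketch of that sequence, assembled from the RPC names and flags visible in the trace (the rpc_cmd wrapper, the 10.0.0.1 target address, and the host/subsystem NQNs are taken from the log; this is an illustrative reconstruction, not a verbatim excerpt of host/auth.sh):

  # restrict the initiator to a single digest/DH-group pair for this iteration
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # attach with the host secret for this keyid (and the controller secret, when one is defined)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # verify the authenticated controller came up, then detach it for the next iteration
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0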
00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.599 15:34:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.873 nvme0n1 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.873 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.165 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.426 nvme0n1 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.426 15:34:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.426 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.997 nvme0n1 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:12.997 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:12.998 15:34:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.569 nvme0n1 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.569 15:34:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTkzZTg2NTA5ZjExMzNjNjk3MDljZjMyNjExYzY2NWLhBID4: 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNlYmVjMmQ4NzU0NGQ2YmZlYmVjZDFjOTkxMjM3NjExZTBlYzExZmM1NWFiMThmNjU0NjEwZjZhNDU0MTViNneUHmA=: 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:13.569 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.511 nvme0n1 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.511 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:14.512 15:34:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 nvme0n1 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.085 15:34:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzQzNDNiZGFlOGNjOThkNGZiZDI3ODIyZDlhNTViOGRuze7i: 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZjI5YzE3NTFhZjRjMzAwMTE2MDQ1ZDRjMDhlNWZkYjSxJLoE: 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:15.085 15:34:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 nvme0n1 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M0ZTVjNzJmM2FhZDdjMzkzMGQ3MDY4ZDczYzUwZjhhYzg0OTZkYWUyY2U5ODE5ijaKTw==: 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2U3MDU3YWUxNzRjZWYzMGQ5OTU1YWM1Nzc4ODZkMTEA+TDz: 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:16.028 15:34:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.028 15:34:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.598 nvme0n1 00:29:16.598 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.598 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.598 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.598 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.598 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmEwMDAyYzdhNDBkYTIxOTQ5YTBkOTE2MTZjY2E5Y2I2Y2U1NTgzMWI0ZDEyYmM1MTBiNjIyYWRkMGM3NDEyMALSEiE=: 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:29:16.859 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.432 nvme0n1 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.432 15:34:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmYyOTEyMzM1ZjU2NDkyNTg4MDdiYmQyZjE2N2IyZmU2NDg5NzI3NDE2MzM0NDAxe8KmHw==: 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: ]] 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5N2JiMjYzZjkwYTA0OTY3ODJiMWE2ZjNmNGIyYTAxZDczZDE3MTZjOTBlNDBljubvIw==: 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.432 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.694 
15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.694 request: 00:29:17.694 { 00:29:17.694 "name": "nvme0", 00:29:17.694 "trtype": "tcp", 00:29:17.694 "traddr": "10.0.0.1", 00:29:17.694 "adrfam": "ipv4", 00:29:17.694 "trsvcid": "4420", 00:29:17.694 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.694 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.694 "prchk_reftag": false, 00:29:17.694 "prchk_guard": false, 00:29:17.694 "hdgst": false, 00:29:17.694 "ddgst": false, 00:29:17.694 "method": "bdev_nvme_attach_controller", 00:29:17.694 "req_id": 1 00:29:17.694 } 00:29:17.694 Got JSON-RPC error response 00:29:17.694 response: 00:29:17.694 { 00:29:17.694 "code": -5, 00:29:17.694 "message": "Input/output error" 00:29:17.694 } 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.694 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.695 request: 00:29:17.695 { 00:29:17.695 "name": "nvme0", 00:29:17.695 "trtype": "tcp", 00:29:17.695 "traddr": "10.0.0.1", 00:29:17.695 "adrfam": "ipv4", 00:29:17.695 "trsvcid": "4420", 00:29:17.695 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.695 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.695 "prchk_reftag": false, 00:29:17.695 "prchk_guard": false, 00:29:17.695 "hdgst": false, 00:29:17.695 "ddgst": false, 00:29:17.695 "dhchap_key": "key2", 00:29:17.695 "method": "bdev_nvme_attach_controller", 00:29:17.695 "req_id": 1 00:29:17.695 } 00:29:17.695 Got JSON-RPC error response 00:29:17.695 response: 00:29:17.695 { 00:29:17.695 "code": -5, 00:29:17.695 "message": "Input/output error" 00:29:17.695 } 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:17.695 15:34:27 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.695 request: 00:29:17.695 { 00:29:17.695 "name": "nvme0", 00:29:17.695 "trtype": "tcp", 00:29:17.695 "traddr": "10.0.0.1", 00:29:17.695 "adrfam": "ipv4", 
00:29:17.695 "trsvcid": "4420", 00:29:17.695 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.695 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.695 "prchk_reftag": false, 00:29:17.695 "prchk_guard": false, 00:29:17.695 "hdgst": false, 00:29:17.695 "ddgst": false, 00:29:17.695 "dhchap_key": "key1", 00:29:17.695 "dhchap_ctrlr_key": "ckey2", 00:29:17.695 "method": "bdev_nvme_attach_controller", 00:29:17.695 "req_id": 1 00:29:17.695 } 00:29:17.695 Got JSON-RPC error response 00:29:17.695 response: 00:29:17.695 { 00:29:17.695 "code": -5, 00:29:17.695 "message": "Input/output error" 00:29:17.695 } 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.695 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.695 rmmod nvme_tcp 00:29:17.956 rmmod nvme_fabrics 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 865639 ']' 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 865639 ']' 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865639' 00:29:17.956 killing process with pid 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 865639 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.956 15:34:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:20.495 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:20.496 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:20.496 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:20.496 15:34:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:23.794 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:23.794 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:24.055 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:24.055 15:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Nz9 /tmp/spdk.key-null.r6R /tmp/spdk.key-sha256.87p /tmp/spdk.key-sha384.4Cx /tmp/spdk.key-sha512.wIq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:24.055 15:34:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:27.358 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:27.358 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:27.358 00:29:27.358 real 0m57.463s 00:29:27.358 user 0m50.936s 00:29:27.358 sys 0m15.353s 00:29:27.358 15:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.358 15:34:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:27.358 ************************************ 00:29:27.358 END TEST nvmf_auth_host 00:29:27.358 ************************************ 00:29:27.358 15:34:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:27.358 15:34:36 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:27.358 15:34:36 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:27.358 15:34:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:27.358 15:34:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.358 15:34:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:27.358 ************************************ 00:29:27.358 START TEST nvmf_digest 00:29:27.358 ************************************ 00:29:27.358 15:34:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:27.644 * Looking for test storage... 
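Editor's note: the nvmf_auth_host teardown traced above (the host/auth.sh cleanup plus nvmf/common.sh clean_kernel_target) unwinds the kernel nvmet target through configfs in roughly the reverse order of its creation, then unloads the modules. A condensed sketch of that sequence using the paths visible in this run; the redirect target of the "echo 0" step is not shown in the trace and is assumed here to be the namespace enable attribute:

    # Tear down the kernel NVMe-oF target that served as the DH-HMAC-CHAP peer.
    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

    rm    $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0    # drop the host ACL symlink
    rmdir $cfg/hosts/nqn.2024-02.io.spdk:host0               # remove the host entry

    if [[ -e $subsys ]]; then
        echo 0 > $subsys/namespaces/1/enable                 # assumption: disable the namespace first
        rm -f  $cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0   # unlink subsystem from port
        rmdir  $subsys/namespaces/1
        rmdir  $cfg/ports/1
        rmdir  $subsys
    fi
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target modules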
00:29:27.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:27.644 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.644 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:27.644 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.644 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.645 15:34:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.645 15:34:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:35.797 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:35.797 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:35.797 Found net devices under 0000:31:00.0: cvl_0_0 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:35.797 Found net devices under 0000:31:00.1: cvl_0_1 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.797 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:35.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.730 ms 00:29:35.798 00:29:35.798 --- 10.0.0.2 ping statistics --- 00:29:35.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.798 rtt min/avg/max/mdev = 0.730/0.730/0.730/0.000 ms 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:29:35.798 00:29:35.798 --- 10.0.0.1 ping statistics --- 00:29:35.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.798 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 ************************************ 00:29:35.798 START TEST nvmf_digest_clean 00:29:35.798 ************************************ 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=882463 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 882463 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 882463 ']' 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.798 
15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:35.798 15:34:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 [2024-07-15 15:34:44.651011] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:35.798 [2024-07-15 15:34:44.651057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.798 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.798 [2024-07-15 15:34:44.720670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.798 [2024-07-15 15:34:44.783737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.798 [2024-07-15 15:34:44.783771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.798 [2024-07-15 15:34:44.783779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.798 [2024-07-15 15:34:44.783785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.798 [2024-07-15 15:34:44.783790] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
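The namespace plumbing traced above (nvmf_tcp_init plus nvmfappstart) reduces to a short shell sequence. Interface names, addresses and flags are copied from this trace; the repo-relative nvmf_tgt path is an abbreviation of the absolute workspace path, so this is an illustrative sketch of the traced commands rather than the common.sh code itself.

    # Target-side network setup as traced above (cvl_0_0 = target port, cvl_0_1 = initiator port)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace, paused until framework_start_init arrives over RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &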
00:29:35.798 [2024-07-15 15:34:44.783809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.798 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.798 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:35.798 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:35.798 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.798 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:36.134 null0 00:29:36.134 [2024-07-15 15:34:45.522529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.134 [2024-07-15 15:34:45.546715] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=882511 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 882511 /var/tmp/bperf.sock 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 882511 ']' 00:29:36.134 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:36.135 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.135 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.135 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:36.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.135 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.135 15:34:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:36.135 [2024-07-15 15:34:45.602913] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:36.135 [2024-07-15 15:34:45.602960] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882511 ] 00:29:36.135 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.135 [2024-07-15 15:34:45.665959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.135 [2024-07-15 15:34:45.730776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.077 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.339 nvme0n1 00:29:37.339 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:37.339 15:34:46 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:37.600 Running I/O for 2 seconds... 
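Each clean-path run drives bdevperf through the same RPC sequence that is visible in the trace above: start bdevperf paused, initialize its framework, attach an NVMe-oF controller with TCP data digest enabled (--ddgst), then kick off the workload with bdevperf.py. A condensed sketch follows; the socket path, NQN and addresses are copied from the log, while the binary and script paths are shortened to be repo-relative.

    # One bperf run, condensed from the bperf_rpc / bperf_py calls traced above
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the real script waits for the RPC socket via waitforlisten before issuing these)
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests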
00:29:39.514 00:29:39.514 Latency(us) 00:29:39.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:39.514 nvme0n1 : 2.00 19696.84 76.94 0.00 0.00 6489.85 3031.04 16820.91 00:29:39.514 =================================================================================================================== 00:29:39.514 Total : 19696.84 76.94 0.00 0.00 6489.85 3031.04 16820.91 00:29:39.514 0 00:29:39.514 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:39.514 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:39.514 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:39.514 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:39.514 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:39.514 | select(.opcode=="crc32c") 00:29:39.514 | "\(.module_name) \(.executed)"' 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 882511 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 882511 ']' 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 882511 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882511 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882511' 00:29:39.775 killing process with pid 882511 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 882511 00:29:39.775 Received shutdown signal, test time was about 2.000000 seconds 00:29:39.775 00:29:39.775 Latency(us) 00:29:39.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:39.775 =================================================================================================================== 00:29:39.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 882511 00:29:39.775 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:40.037 15:34:49 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=883346 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 883346 /var/tmp/bperf.sock 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 883346 ']' 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:40.037 15:34:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.037 [2024-07-15 15:34:49.442747] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:40.037 [2024-07-15 15:34:49.442800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883346 ] 00:29:40.037 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:40.037 Zero copy mechanism will not be used. 
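After each run the script checks, on the bdevperf side, that CRC32C digest work was actually executed and by the expected accel module (software here, since DSA scanning is disabled). The jq filter below is copied from the trace; the rpc.py path is shortened, so read this as a sketch of the traced verification step.

    # Digest verification step from host/digest.sh, as traced after each run
    read -r acc_module acc_executed < <(
        ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # at least one digest must have been computed
    [[ $acc_module == software ]]     # scan_dsa=false, so the software module is expected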
00:29:40.037 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.037 [2024-07-15 15:34:49.506038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.037 [2024-07-15 15:34:49.569939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.608 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.608 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:40.608 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:40.608 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:40.608 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:40.869 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.869 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:41.441 nvme0n1 00:29:41.441 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:41.441 15:34:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:41.441 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:41.441 Zero copy mechanism will not be used. 00:29:41.441 Running I/O for 2 seconds... 
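The MiB/s column in the result tables is simply IOPS multiplied by the IO size; a quick cross-check against the first table above (19696.84 IOPS at 4096 bytes):

    # 19696.84 IOPS * 4096 B / 2^20 comes to about 76.94 MiB/s, matching the randread 4k table
    echo "scale=2; 19696.84 * 4096 / 1048576" | bc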
00:29:43.353 00:29:43.354 Latency(us) 00:29:43.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:43.354 nvme0n1 : 2.00 3573.90 446.74 0.00 0.00 4472.79 907.95 6690.13 00:29:43.354 =================================================================================================================== 00:29:43.354 Total : 3573.90 446.74 0.00 0.00 4472.79 907.95 6690.13 00:29:43.354 0 00:29:43.354 15:34:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:43.354 15:34:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:43.354 15:34:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:43.354 15:34:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:43.354 | select(.opcode=="crc32c") 00:29:43.354 | "\(.module_name) \(.executed)"' 00:29:43.354 15:34:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 883346 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 883346 ']' 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 883346 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883346 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883346' 00:29:43.614 killing process with pid 883346 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 883346 00:29:43.614 Received shutdown signal, test time was about 2.000000 seconds 00:29:43.614 00:29:43.614 Latency(us) 00:29:43.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.614 =================================================================================================================== 00:29:43.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.614 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 883346 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:43.874 15:34:53 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884159 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884159 /var/tmp/bperf.sock 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 884159 ']' 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:43.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:43.874 15:34:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:43.874 [2024-07-15 15:34:53.295340] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:29:43.874 [2024-07-15 15:34:53.295397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884159 ] 00:29:43.874 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.874 [2024-07-15 15:34:53.357010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.874 [2024-07-15 15:34:53.420774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:44.811 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.070 nvme0n1 00:29:45.070 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:45.070 15:34:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:45.331 Running I/O for 2 seconds... 
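Between runs the bperf process is torn down with killprocess, whose traced steps (existence check, uname check, ps lookup, kill, wait) boil down to the sketch below. The real helper in autotest_common.sh has additional branches (for example sudo-wrapped processes) that this run does not exercise, so this only covers the path seen in the trace.

    # killprocess as exercised in this log: verify the pid, then kill and reap it
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                    # process must still exist
        [[ $(uname) == Linux ]]
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for the bperf runs here
        [[ $process_name != sudo ]]                       # this trace takes the non-sudo branch
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                        # bdevperf prints its shutdown latency table here
    }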
00:29:47.240 00:29:47.240 Latency(us) 00:29:47.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.240 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:47.240 nvme0n1 : 2.01 21300.43 83.20 0.00 0.00 5996.67 3072.00 14854.83 00:29:47.240 =================================================================================================================== 00:29:47.240 Total : 21300.43 83.20 0.00 0.00 5996.67 3072.00 14854.83 00:29:47.240 0 00:29:47.240 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:47.240 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:47.240 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:47.240 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:47.240 | select(.opcode=="crc32c") 00:29:47.240 | "\(.module_name) \(.executed)"' 00:29:47.240 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884159 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 884159 ']' 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 884159 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884159 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884159' 00:29:47.500 killing process with pid 884159 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 884159 00:29:47.500 Received shutdown signal, test time was about 2.000000 seconds 00:29:47.500 00:29:47.500 Latency(us) 00:29:47.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:47.500 =================================================================================================================== 00:29:47.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:47.500 15:34:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 884159 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:47.760 15:34:57 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=884861 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 884861 /var/tmp/bperf.sock 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 884861 ']' 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:47.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:47.760 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:47.760 [2024-07-15 15:34:57.180708] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:47.760 [2024-07-15 15:34:57.180770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884861 ] 00:29:47.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:47.760 Zero copy mechanism will not be used. 
00:29:47.760 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.760 [2024-07-15 15:34:57.245134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.760 [2024-07-15 15:34:57.307999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.337 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:48.337 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:48.337 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:48.337 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:48.337 15:34:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:48.598 15:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.598 15:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:48.859 nvme0n1 00:29:48.859 15:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:48.859 15:34:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:48.859 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:48.859 Zero copy mechanism will not be used. 00:29:48.859 Running I/O for 2 seconds... 
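Taken together, nvmf_digest_clean sweeps a small workload matrix over the digest path; the four run_bperf invocations seen in this trace are equivalent to the loop below (run_bperf is the helper in host/digest.sh, and the trailing false is the scan_dsa flag).

    # The four clean-path runs traced in this log (rw, IO size, queue depth), DSA scan disabled
    for cfg in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $cfg false    # word-splitting on $cfg supplies rw, bs and qd positionally
    done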
00:29:51.401 00:29:51.401 Latency(us) 00:29:51.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.401 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:51.401 nvme0n1 : 2.00 5948.74 743.59 0.00 0.00 2684.08 1966.08 11468.80 00:29:51.401 =================================================================================================================== 00:29:51.401 Total : 5948.74 743.59 0.00 0.00 2684.08 1966.08 11468.80 00:29:51.401 0 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:51.401 | select(.opcode=="crc32c") 00:29:51.401 | "\(.module_name) \(.executed)"' 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 884861 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 884861 ']' 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 884861 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884861 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884861' 00:29:51.401 killing process with pid 884861 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 884861 00:29:51.401 Received shutdown signal, test time was about 2.000000 seconds 00:29:51.401 00:29:51.401 Latency(us) 00:29:51.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.401 =================================================================================================================== 00:29:51.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 884861 00:29:51.401 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 882463 00:29:51.402 15:35:00 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 882463 ']' 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 882463 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 882463 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 882463' 00:29:51.402 killing process with pid 882463 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 882463 00:29:51.402 15:35:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 882463 00:29:51.663 00:29:51.663 real 0m16.438s 00:29:51.663 user 0m32.201s 00:29:51.663 sys 0m3.416s 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:51.663 ************************************ 00:29:51.663 END TEST nvmf_digest_clean 00:29:51.663 ************************************ 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:51.663 ************************************ 00:29:51.663 START TEST nvmf_digest_error 00:29:51.663 ************************************ 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=885577 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 885577 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 885577 ']' 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.663 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:51.663 [2024-07-15 15:35:01.163898] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:51.663 [2024-07-15 15:35:01.163945] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.663 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.663 [2024-07-15 15:35:01.232918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.924 [2024-07-15 15:35:01.296960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.924 [2024-07-15 15:35:01.296994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.924 [2024-07-15 15:35:01.297001] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.924 [2024-07-15 15:35:01.297007] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.924 [2024-07-15 15:35:01.297013] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
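The error-path test starting here configures CRC32C error injection before any I/O runs; the RPC sequence traced below condenses to the sketch that follows. Calls without -s go to the nvmf target's default /var/tmp/spdk.sock, the -s /var/tmp/bperf.sock calls go to bdevperf, and the corruption injected on the target side is consistent with the data digest errors the initiator logs further down. Paths are shortened to be repo-relative; everything else is copied from the trace.

    # Error-injection setup for nvmf_digest_error, condensed from the trace below
    ./scripts/rpc.py accel_assign_opc -o crc32c -m error                     # target: route crc32c through the error module
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1                              # bdevperf options as traced
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable           # start with injection disabled
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # then corrupt 256 crc32c operations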
00:29:51.924 [2024-07-15 15:35:01.297036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.495 [2024-07-15 15:35:01.966957] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.495 15:35:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.495 null0 00:29:52.495 [2024-07-15 15:35:02.047786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.495 [2024-07-15 15:35:02.071990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=885921 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 885921 /var/tmp/bperf.sock 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 885921 ']' 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.495 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.755 [2024-07-15 15:35:02.124274] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:52.755 [2024-07-15 15:35:02.124325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885921 ] 00:29:52.755 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.755 [2024-07-15 15:35:02.185486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.755 [2024-07-15 15:35:02.249351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.394 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.394 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:53.394 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:53.394 15:35:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.653 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.913 nvme0n1 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:53.913 15:35:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.913 Running I/O for 2 seconds... 00:29:53.913 [2024-07-15 15:35:03.419317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.419352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.419364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.434200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.434225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.434234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.450193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.450216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.450226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.465545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.465568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.465577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.477143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.477164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.477173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.492708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.492730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.492739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.504748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.504770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11698 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.504779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.517579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.517601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.517609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:53.913 [2024-07-15 15:35:03.528546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:53.913 [2024-07-15 15:35:03.528567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.913 [2024-07-15 15:35:03.528577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.541925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.541947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.541956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.555257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.555278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.555287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.566200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.566221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.566229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.579893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.579915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.579924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.591741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.591763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:117 nsid:1 lba:18988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.591771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.605200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.605222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.605230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.617488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.617509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.617518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.629139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.629161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.629174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.643536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.643557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.643566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.655282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.655303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.655311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.668174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.668196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.668205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.681102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.681123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.681132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.693311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.693332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.693341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.706612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.706633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.706642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.717871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.717897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.717906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.730751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.730773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.730781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.747448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.747470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.747479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.763675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.763696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.763705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.774873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 
[2024-07-15 15:35:03.774899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.774908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.172 [2024-07-15 15:35:03.788537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.172 [2024-07-15 15:35:03.788557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.172 [2024-07-15 15:35:03.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.802177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.802198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.802207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.817521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.817542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.817551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.829099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.829121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.829130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.845069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.845090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.845099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.859524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.859545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.859558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.870457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.870477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.870486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.887028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.887049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.887058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.902936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.902957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.902965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.914303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.914324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.914332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.930101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.930122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.930131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.945776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.945796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.945804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.957568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.957588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.957597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.973282] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.973303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.973312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.432 [2024-07-15 15:35:03.986099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.432 [2024-07-15 15:35:03.986123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.432 [2024-07-15 15:35:03.986132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.433 [2024-07-15 15:35:03.998130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.433 [2024-07-15 15:35:03.998150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.433 [2024-07-15 15:35:03.998159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.433 [2024-07-15 15:35:04.009061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.433 [2024-07-15 15:35:04.009082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.433 [2024-07-15 15:35:04.009091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.433 [2024-07-15 15:35:04.021962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.433 [2024-07-15 15:35:04.021982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.433 [2024-07-15 15:35:04.021990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.433 [2024-07-15 15:35:04.034695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.433 [2024-07-15 15:35:04.034716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.433 [2024-07-15 15:35:04.034725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.433 [2024-07-15 15:35:04.045459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.433 [2024-07-15 15:35:04.045481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.433 [2024-07-15 15:35:04.045489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:54.693 [2024-07-15 15:35:04.060395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.060416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.060425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.076353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.076374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.076382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.088639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.088659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.088668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.104484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.104505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.104514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.116522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.116543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.116551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.128603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.128625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.128633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.140171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.140192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.140201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.153698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.153718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.165349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.165370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.165378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.178838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.178859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.178867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.190468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.190489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.190497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.204596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.204617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.204629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.216808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.216828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.216837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.228981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.229002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.229010] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.240921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.240942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.240951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.253069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.253089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.253098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.265568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.265589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.265597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.277764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.277785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.277794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.289765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.289786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.289794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.693 [2024-07-15 15:35:04.302421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.693 [2024-07-15 15:35:04.302442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.693 [2024-07-15 15:35:04.302451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.314843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.314867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.314875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.325359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.325379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.325388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.338842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.338863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.338871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.351781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.351802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.351810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.364911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.364931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.364939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.380449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.380470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.380478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.396390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.396411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.396419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.412166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.412187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:54.953 [2024-07-15 15:35:04.412195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.422931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.422951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.422959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.437134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.437154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.437163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.450736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.450757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.450766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.463267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.463288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.463297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.475257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.475278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.475287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.487605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.487626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.487634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.499587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.499607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6844 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.499616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.512447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.512468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.512476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.522931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.522951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.522959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.536430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.536450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.536462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.549435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.549455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.549464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:54.953 [2024-07-15 15:35:04.560662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:54.953 [2024-07-15 15:35:04.560682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:54.953 [2024-07-15 15:35:04.560691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.575829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.575850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.575858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.591009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.591030] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.591038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.601846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.601867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.601876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.614458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.614478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.614487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.626804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.626825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.626834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.640010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.640030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.640038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.652825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.652845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.652854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.664171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.664192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.664200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.680464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.680486] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.680495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.695056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.695078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.695086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.707182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.707202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.707211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.721376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.721397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.721405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.733182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.733202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.733211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.745562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.745582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.745590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.758538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.758559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.758570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.769400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 
00:29:55.214 [2024-07-15 15:35:04.769421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.769429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.784329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.784350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.784359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.797519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.797540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.797548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.809733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.809753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.809762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.214 [2024-07-15 15:35:04.821270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.214 [2024-07-15 15:35:04.821291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.214 [2024-07-15 15:35:04.821299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.833607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.833628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.833637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.845023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.845044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.845053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.858467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.858488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.858496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.872790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.872819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.872827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.883536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.883557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.883565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.896549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.896570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.896579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.909131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.909152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.909161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.922186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.922206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.922214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.933463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.933484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.933492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.945600] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.945621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.945630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.959077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.959098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.959107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.970187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.970207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.970216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.985027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.985048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.985057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:04.998032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:04.998054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:04.998062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.010345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.010366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.010374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.021576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.021597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.021606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:55.474 [2024-07-15 15:35:05.037071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.037092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.037100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.048795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.048816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.048824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.061048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.061069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.061077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.077379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.077400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.077409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.474 [2024-07-15 15:35:05.088804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.474 [2024-07-15 15:35:05.088825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.474 [2024-07-15 15:35:05.088837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.100533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.100554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.100563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.114305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.114327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.114335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.125962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.125983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.125991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.139593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.139614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.139623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.152046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.152068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.152078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.163725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.163746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.163755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.178408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.178429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.178438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.190033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.190054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.190062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.205495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.205520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.205529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.216170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.216191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.216200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.228766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.228787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.228795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.241393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.241414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.241423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.254129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.254150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.254159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.266303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.266324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.266333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.277502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.277523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.277532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.293050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.293071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.293080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.309086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.309107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.309115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.319996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.320017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.320026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.332133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.332155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.332164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.734 [2024-07-15 15:35:05.346149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.734 [2024-07-15 15:35:05.346170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.734 [2024-07-15 15:35:05.346179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.994 [2024-07-15 15:35:05.362360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.994 [2024-07-15 15:35:05.362381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.994 [2024-07-15 15:35:05.362390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.994 [2024-07-15 15:35:05.373900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.994 [2024-07-15 15:35:05.373920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.994 [2024-07-15 15:35:05.373929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.994 [2024-07-15 15:35:05.387245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.994 [2024-07-15 15:35:05.387265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:55.994 [2024-07-15 15:35:05.387274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.994 [2024-07-15 15:35:05.399291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61eaf0) 00:29:55.994 [2024-07-15 15:35:05.399311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.994 [2024-07-15 15:35:05.399320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:55.994 00:29:55.994 Latency(us) 00:29:55.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:55.994 nvme0n1 : 2.00 19593.13 76.54 0.00 0.00 6523.97 3426.99 17803.95 00:29:55.994 =================================================================================================================== 00:29:55.994 Total : 19593.13 76.54 0.00 0.00 6523.97 3426.99 17803.95 00:29:55.994 0 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:55.994 | .driver_specific 00:29:55.994 | .nvme_error 00:29:55.994 | .status_code 00:29:55.994 | .command_transient_transport_error' 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 885921 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 885921 ']' 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 885921 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.994 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885921 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885921' 00:29:56.253 killing process with pid 885921 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 885921 00:29:56.253 Received shutdown signal, test time was about 2.000000 seconds 00:29:56.253 00:29:56.253 Latency(us) 00:29:56.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.253 =================================================================================================================== 
00:29:56.253 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 885921 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=886609 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 886609 /var/tmp/bperf.sock 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 886609 ']' 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:56.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.253 15:35:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:56.253 [2024-07-15 15:35:05.835524] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:29:56.253 [2024-07-15 15:35:05.835582] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886609 ] 00:29:56.253 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:56.253 Zero copy mechanism will not be used. 
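Before the 128 KiB pass launched just above starts producing output, it is worth spelling out how the harness graded the 4 KiB pass that just finished. The `get_transient_errcount` helper traced earlier asks the bperf instance for bdev I/O statistics over its RPC socket and pulls the transient-transport-error counter out of the returned JSON; the `(( 153 > 0 ))` check then passes because 153 injected digest errors were reported. A minimal stand-alone sketch of that extraction, using only the socket path, bdev name, and jq filter visible in the trace (everything else, including error handling, is assumed):

```bash
#!/usr/bin/env bash
# Sketch only -- not the digest.sh source. Assumes a bdevperf instance is
# already serving RPCs on /var/tmp/bperf.sock and exposes a bdev "nvme0n1"
# (both taken from the trace above).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace

errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
         | .driver_specific
         | .nvme_error
         | .status_code
         | .command_transient_transport_error')

# Pass only if at least one injected digest error was surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR (the 4 KiB pass above reported 153).
(( errcount > 0 )) || exit 1
echo "transient transport errors: $errcount"
```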
00:29:56.253 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.512 [2024-07-15 15:35:05.896849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.512 [2024-07-15 15:35:05.960078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.081 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.081 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:57.081 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:57.081 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:57.341 15:35:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:57.601 nvme0n1 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:57.601 15:35:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:57.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:57.601 Zero copy mechanism will not be used. 00:29:57.601 Running I/O for 2 seconds... 
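Condensed from the trace above, the setup for this 128 KiB / queue-depth-16 pass is the same RPC sequence as before: enable NVMe error statistics, attach the target with TCP data digest enabled, arm crc32c corruption in the accel layer, and then kick off the queued bdevperf job. The sketch below restates that flow with the addresses and names shown in the trace; it is an illustration of the sequence, not the actual test script, and the accel injection in the trace goes through `rpc_cmd` (the target-side RPC socket), whose socket path is not shown here and is assumed to be the default.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the traced RPC sequence (not host/digest.sh itself).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf RPC socket (from the trace)
TGT_RPC="$SPDK_DIR/scripts/rpc.py"                        # target-side socket assumed to be the default

# Count NVMe errors instead of failing I/O, and retry indefinitely in the bdev layer.
$BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean state: no crc32c error injection on the target.
$TGT_RPC accel_error_inject_error -o crc32c -t disable

# Attach the NVMe-oF/TCP target with data digest enabled (--ddgst); values from the trace.
$BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c error injection (-t corrupt -i 32, as in the trace) so the host sees
# data digest errors, reported below as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the queued randread workload (128 KiB I/O, qd 16, 2 seconds) in bdevperf.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```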
00:29:57.862 [2024-07-15 15:35:07.223442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.223481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.223492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.235092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.235117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.235127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.245322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.245350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.245359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.256268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.256293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.256303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.264596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.264618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.264626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.275415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.275437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.275445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.284611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.284633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.284641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.293704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.293726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.293735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.302426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.302448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.302457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.312453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.312479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.312488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.322005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.322036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.332880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.332911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.332920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.341618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.341640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.341649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.351509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.351539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.361291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.361313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.361321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.372521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.372547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.372557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.379839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.379861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.379870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.390299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.390324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.390333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.398703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.398725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.398733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.409184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.409209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.409222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.419435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.419457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.419465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.430388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.430411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.430423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.438658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.438679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.438687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.448896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.448918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.448926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.457640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.862 [2024-07-15 15:35:07.457661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.862 [2024-07-15 15:35:07.457670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:57.862 [2024-07-15 15:35:07.467462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.863 [2024-07-15 15:35:07.467487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.863 [2024-07-15 15:35:07.467498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:57.863 [2024-07-15 15:35:07.476006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:57.863 [2024-07-15 15:35:07.476028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.863 [2024-07-15 15:35:07.476036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.486230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.486251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.486260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.497173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.497194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.497203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.506311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.506332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.506340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.516259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.516280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.516288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.524394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.524415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.524424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.534431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.534453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.534461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.542929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.542951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.542959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.552940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.552962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.552970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.560411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.560434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.560442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.570563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.570585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.570598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.576439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.576461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.576469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.583533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.583556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.583565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.594303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.594334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.603690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.603712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.603721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.612264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.612286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.612295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.621263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.621286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.621295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.629914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.629936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.629945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.638142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.638165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.638174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.647569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.647594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.647602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.655573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.655595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.655603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.663411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.663433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.663442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.673733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 
[2024-07-15 15:35:07.673754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.673763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.683982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.124 [2024-07-15 15:35:07.684005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.124 [2024-07-15 15:35:07.684014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.124 [2024-07-15 15:35:07.692911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.692933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.692941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.125 [2024-07-15 15:35:07.702769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.702790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.702799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.125 [2024-07-15 15:35:07.711193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.711216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.711225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.125 [2024-07-15 15:35:07.719792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.719814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.719823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.125 [2024-07-15 15:35:07.728979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.729001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.729009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.125 [2024-07-15 15:35:07.738800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7f74b0) 00:29:58.125 [2024-07-15 15:35:07.738823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.125 [2024-07-15 15:35:07.738831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.748036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.748059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.748069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.757673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.757696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.757704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.765906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.765928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.765937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.774526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.774548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.774557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.783606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.783629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.783637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.792512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.792535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.792544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.801832] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.801855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.801868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.810721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.810743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.821658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.821680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.821688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.830003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.830025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.830034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.840686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.840709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.840717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.850234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.850257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.850266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.858876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.858904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.858913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:58.386 [2024-07-15 15:35:07.867109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.867135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.867143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.875019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.875042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.875050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.884146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.884172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.884180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.891766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.891789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.891797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.901475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.901497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.901505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.910889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.910911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.910919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.920094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.920117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.920125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.928872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.928901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.928909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.386 [2024-07-15 15:35:07.938770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.386 [2024-07-15 15:35:07.938793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.386 [2024-07-15 15:35:07.938802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.947359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.947382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.947390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.956083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.956106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.956114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.966143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.966166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.966174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.975018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.975041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.975049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.986626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.986649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.986658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:07.993177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:07.993200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:07.993208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.387 [2024-07-15 15:35:08.001186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.387 [2024-07-15 15:35:08.001209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.387 [2024-07-15 15:35:08.001218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.010662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.010685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.010694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.018866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.018893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.018902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.025274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.025297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.025305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.035290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.035312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.035325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.044900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.044922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.044931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.052442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.052465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.052473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.058293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.058315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.058323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.063255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.063278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.063286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.073463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.073486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.073494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.082040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.082062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.082070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.089993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.090013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.090022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.099004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.099026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 
[2024-07-15 15:35:08.099035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.108305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.108330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.108339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.118746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.118767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.118776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.129668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.129690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.129698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.139574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.139597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.139605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.149935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.149956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.149964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.160486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.160509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.160518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.169384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.169405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.169414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.178107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.178130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.178138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.187383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.187404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.187413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.197035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.197056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.197064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.207271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.207292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.207301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.217112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.217134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.217142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.225142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.225163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.225172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.233983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.234005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.234013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.242932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.242952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.242961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.251993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.648 [2024-07-15 15:35:08.252014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.648 [2024-07-15 15:35:08.252022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.648 [2024-07-15 15:35:08.260286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.649 [2024-07-15 15:35:08.260307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.649 [2024-07-15 15:35:08.260315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.269021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.269042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.269054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.277809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.277832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.277840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.288821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.288843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.288851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.297686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.297707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.297716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.304552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.304574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.304583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.310765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.310786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.310795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.316869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.316895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.316904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.322850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.322870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.322878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.909 [2024-07-15 15:35:08.331301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.909 [2024-07-15 15:35:08.331324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.909 [2024-07-15 15:35:08.331332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.340929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.340954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.351624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 
[2024-07-15 15:35:08.351646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.351655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.361177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.361199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.361207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.371383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.371405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.371413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.381279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.381300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.381308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.391229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.391250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.391259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.399563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.399586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.399594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.407088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.407119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.407127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.416706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.416729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.416738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.428218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.428239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.428248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.439445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.439468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.439476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.447977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.447999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.448007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.457278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.457299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.457308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.465430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.465451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.465460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.475334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.475355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.475364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.484340] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.484361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.484369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.494642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.494664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.494672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.504941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.504962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.504975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.515233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.515255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.515264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:58.910 [2024-07-15 15:35:08.522699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:58.910 [2024-07-15 15:35:08.522721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.910 [2024-07-15 15:35:08.522730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.531510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.531532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.531541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.539494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.539517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.539526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:59.171 [2024-07-15 15:35:08.548745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.548769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.548777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.557959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.557980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.557989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.566168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.566190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.566199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.575800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.575822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.575831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.584397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.584423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.584431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.594228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.594250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.594258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.602941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.602964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.602972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.609414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.609437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.609446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.618547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.618569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.626901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.626923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.626931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.637488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.637510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.637518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.648210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.648233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.648241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.655866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.655894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.655903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.663099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.663121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.663129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.671528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.671551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.671559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.682332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.682354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.682362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.688902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.688924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.688932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.699212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.699234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.699243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.708593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.708616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.708624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.718113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.718136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.171 [2024-07-15 15:35:08.718144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.171 [2024-07-15 15:35:08.729543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.171 [2024-07-15 15:35:08.729566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.729574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.739601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.739623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.739635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.749198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.749221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.749229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.758508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.758531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.758539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.768924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.768946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.768955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.775925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.775948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.775956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.172 [2024-07-15 15:35:08.784849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.172 [2024-07-15 15:35:08.784871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.172 [2024-07-15 15:35:08.784879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.432 [2024-07-15 15:35:08.793704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.432 [2024-07-15 15:35:08.793727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.432 
[2024-07-15 15:35:08.793735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.432 [2024-07-15 15:35:08.804987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.805009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.805018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.815705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.815728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.815736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.825199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.825222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.825230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.835029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.835050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.835059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.843952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.843975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.843983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.851456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.851478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.851486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.861974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.861997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.862005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.873134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.873156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.873164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.882918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.882940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.882949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.892763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.892786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.892794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.902253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.902277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.902289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.913426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.913448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.913457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.922819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.922842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.922851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.932792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.932815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.932823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.941438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.941461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.941470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.951585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.951608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.951617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.961038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.961061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.961070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.971848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.971871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.971879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.980815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.980838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.980846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:08.992240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:08.992266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:08.992274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.001306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.001329] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.001337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.009374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.009397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.009405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.019510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.019533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.019542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.031844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.031867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.031875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.041142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.041165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.041173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.433 [2024-07-15 15:35:09.051098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.433 [2024-07-15 15:35:09.051121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.433 [2024-07-15 15:35:09.051129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.060951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 [2024-07-15 15:35:09.060975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.060984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.071827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 
[2024-07-15 15:35:09.071849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.071858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.081989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 [2024-07-15 15:35:09.082011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.082019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.092941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 [2024-07-15 15:35:09.092964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.092972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.102192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 [2024-07-15 15:35:09.102215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.102223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.112364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.694 [2024-07-15 15:35:09.112387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.694 [2024-07-15 15:35:09.112396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.694 [2024-07-15 15:35:09.118890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.118911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.118919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.127916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.127939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.127947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.138755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.138777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.138786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.150396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.150419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.150428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.160322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.160345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.160357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.170385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.170409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.170418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.180181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.180204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.180212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.190283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.190305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.190314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.201047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.201070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.201079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:59.695 [2024-07-15 15:35:09.211180] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f74b0) 00:29:59.695 [2024-07-15 15:35:09.211203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.695 [2024-07-15 15:35:09.211211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:59.695 00:29:59.695 Latency(us) 00:29:59.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.695 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:59.695 nvme0n1 : 2.00 3331.46 416.43 0.00 0.00 4796.82 771.41 12724.91 00:29:59.695 =================================================================================================================== 00:29:59.695 Total : 3331.46 416.43 0.00 0.00 4796.82 771.41 12724.91 00:29:59.695 0 00:29:59.695 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:59.695 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:59.695 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:59.695 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:59.695 | .driver_specific 00:29:59.695 | .nvme_error 00:29:59.695 | .status_code 00:29:59.695 | .command_transient_transport_error' 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 886609 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 886609 ']' 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 886609 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886609 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886609' 00:29:59.955 killing process with pid 886609 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 886609 00:29:59.955 Received shutdown signal, test time was about 2.000000 seconds 00:29:59.955 00:29:59.955 Latency(us) 00:29:59.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.955 =================================================================================================================== 00:29:59.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.955 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 886609 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=887286 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 887286 /var/tmp/bperf.sock 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 887286 ']' 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:00.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:00.216 15:35:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:00.216 [2024-07-15 15:35:09.628263] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:30:00.216 [2024-07-15 15:35:09.628321] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887286 ] 00:30:00.216 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.216 [2024-07-15 15:35:09.689687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.216 [2024-07-15 15:35:09.753907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.786 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:00.786 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:00.786 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:00.786 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.046 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:01.306 nvme0n1 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:01.306 15:35:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:01.566 Running I/O for 2 seconds... 
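The xtrace above shows host/digest.sh preparing the randwrite error run against the bdevperf instance listening on /var/tmp/bperf.sock: NVMe error statistics and unlimited bdev retries are turned on, crc32c error injection is disabled while the controller is attached over TCP with data digest enabled (--ddgst), injection is then re-armed with the corrupt parameters the test uses, and perform_tests starts the 2-second randwrite workload (4096 B, queue depth 128, per the run_bperf_err arguments). A minimal sketch of that RPC sequence, using the same socket path, target address, and subsystem NQN that appear in the trace (repository paths abbreviated):

    # count transient transport errors instead of failing I/O (infinite bdev retries)
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep crc32c injection off while attaching with data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm crc32c corruption with the parameters from the trace (-t corrupt -i 256)
    scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # kick off the queued bdevperf job
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the expected result of this injection; as in the randread pass above, the test later reads the count back through bdev_get_iostat's nvme_error/command_transient_transport_error counter.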
00:30:01.566 [2024-07-15 15:35:10.958812] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f81e0 00:30:01.566 [2024-07-15 15:35:10.959792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:10.959825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:10.971086] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f7100 00:30:01.566 [2024-07-15 15:35:10.972051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:10.972073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:10.983417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f7da8 00:30:01.566 [2024-07-15 15:35:10.984529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:10.984549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:10.995447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f8e88 00:30:01.566 [2024-07-15 15:35:10.996571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:10.996591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.007292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f9f68 00:30:01.566 [2024-07-15 15:35:11.008416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.008441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.019123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190fb048 00:30:01.566 [2024-07-15 15:35:11.020251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.020270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.030958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190eee38 00:30:01.566 [2024-07-15 15:35:11.032098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.032117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 
cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.042755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190edd58 00:30:01.566 [2024-07-15 15:35:11.043876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.043901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.054554] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ecc78 00:30:01.566 [2024-07-15 15:35:11.055676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.055695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.065592] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190e0630 00:30:01.566 [2024-07-15 15:35:11.066689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.066708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.078942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190e8088 00:30:01.566 [2024-07-15 15:35:11.080108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.080127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.091096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190e3d08 00:30:01.566 [2024-07-15 15:35:11.092522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.092541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.103315] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f9f68 00:30:01.566 [2024-07-15 15:35:11.104888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.104907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.114310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190fa3a0 00:30:01.566 [2024-07-15 15:35:11.115743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.115761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.124783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ebfd0 00:30:01.566 [2024-07-15 15:35:11.125789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.125808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.136008] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f1430 00:30:01.566 [2024-07-15 15:35:11.136957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.136976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.149289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f20d8 00:30:01.566 [2024-07-15 15:35:11.150464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.150482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.161112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190e3d08 00:30:01.566 [2024-07-15 15:35:11.162287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.162306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.566 [2024-07-15 15:35:11.173048] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190e6fa8 00:30:01.566 [2024-07-15 15:35:11.174220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.566 [2024-07-15 15:35:11.174240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.186413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f35f0 00:30:01.826 [2024-07-15 15:35:11.188205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.188224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.197563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190f92c0 00:30:01.826 [2024-07-15 15:35:11.198954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.198973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.208861] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.209155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.209174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.220997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.221301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.221321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.233093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.233362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.233381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.245207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.245494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.245513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.257301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.257604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.257623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.269415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.269720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.269739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.281522] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.281789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.281808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.293635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.293933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.293952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.305767] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.306063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.826 [2024-07-15 15:35:11.306082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.826 [2024-07-15 15:35:11.317873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.826 [2024-07-15 15:35:11.318177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.318199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.329970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.330263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.330282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.342080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.342363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.342382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.354176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.354474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.354493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.366297] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.366591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 
15:35:11.366610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.378432] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.378611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.378629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.390529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.390833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.390851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.402637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.402952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.402971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.414771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.415060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.415079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.426957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.427266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.427285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:01.827 [2024-07-15 15:35:11.439094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:01.827 [2024-07-15 15:35:11.439387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:01.827 [2024-07-15 15:35:11.439405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.087 [2024-07-15 15:35:11.451183] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.451502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:02.088 [2024-07-15 15:35:11.451521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.463313] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.463605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.463623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.475428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.475745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.475763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.487576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.487876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.487899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.499671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.499981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.499999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.511785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.512081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.512100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.523876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.524189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.524207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.536002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.536313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2168 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.536332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.548094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.548398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.548417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.560206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.560516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.560535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.572307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.572590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.572609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.584437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.584722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.584741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.596505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.596796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.596815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.608641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.608956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.608976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.620717] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.621095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:24228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.621114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.632848] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.633139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.633161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.644918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.645230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.645249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.657051] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.657354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.657373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.669129] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.669413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.669432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.681274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.681553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.681571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.693332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.693616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.693635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.088 [2024-07-15 15:35:11.705470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.088 [2024-07-15 15:35:11.705767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:15167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.088 [2024-07-15 15:35:11.705786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.717561] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.717872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.717896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.729690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.730013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.730032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.741791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.742108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.742127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.753926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.754214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.754233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.766030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.766356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.766375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.778142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.778424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.778443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.790281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.790595] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.790614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.802413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.802758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.802777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.814547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.814869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.814891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.826674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.826970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.826989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.838785] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.839104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.839123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.850907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.851208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.851227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.863016] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.863292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.863311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.875154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.875462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.875481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.887271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.887547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.887566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.899398] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.899685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.899704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.911506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.911806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.911825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.923645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.923929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.349 [2024-07-15 15:35:11.923948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.349 [2024-07-15 15:35:11.935796] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.349 [2024-07-15 15:35:11.936082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.350 [2024-07-15 15:35:11.936101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.350 [2024-07-15 15:35:11.947929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.350 [2024-07-15 15:35:11.948242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.350 [2024-07-15 15:35:11.948263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.350 [2024-07-15 15:35:11.960022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.350 [2024-07-15 
15:35:11.960322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.350 [2024-07-15 15:35:11.960340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.610 [2024-07-15 15:35:11.972167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.610 [2024-07-15 15:35:11.972481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.610 [2024-07-15 15:35:11.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.610 [2024-07-15 15:35:11.984275] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.610 [2024-07-15 15:35:11.984570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.610 [2024-07-15 15:35:11.984589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.610 [2024-07-15 15:35:11.996431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.610 [2024-07-15 15:35:11.996740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.610 [2024-07-15 15:35:11.996759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.610 [2024-07-15 15:35:12.008547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.610 [2024-07-15 15:35:12.008846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.610 [2024-07-15 15:35:12.008865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.610 [2024-07-15 15:35:12.020678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.021009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.032793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.033108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.033127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.044932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 
00:30:02.611 [2024-07-15 15:35:12.045238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.045257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.057049] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.057350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.057369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.069181] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.069445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.069464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.081312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.081594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.081613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.093431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.093719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.093738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.105700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.105878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.105902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.117826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.118008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.118027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.129968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) 
with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.130286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.130305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.142258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.142567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.142586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.154389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.154683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.154701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.166497] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.166785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.166804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.178636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.178921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.190816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.191104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.191123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.202988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.203308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.203327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.215084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.215378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.215397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.611 [2024-07-15 15:35:12.227224] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.611 [2024-07-15 15:35:12.227544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.611 [2024-07-15 15:35:12.227563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.239356] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.239631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.239650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.251455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.251743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.251762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.263577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.263852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.263875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.275693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.276023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.276042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.287821] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.288135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.288154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.299925] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.300235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.300253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.312054] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.312362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.312381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.324154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.324477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.324496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.336284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.336463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.336481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.348377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.348719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.348738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.360498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.360810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.360829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.372609] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.372895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.372914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.384732] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.385047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.385066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.396829] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.397120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.397139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.408968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.409307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.421080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.421356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.421374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.433213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.433511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.433530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.445311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.445626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.445645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.457436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.457724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.457742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 
15:35:12.469552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.872 [2024-07-15 15:35:12.469866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.872 [2024-07-15 15:35:12.469888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:02.872 [2024-07-15 15:35:12.481670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:02.873 [2024-07-15 15:35:12.481950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:02.873 [2024-07-15 15:35:12.481969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.493798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.494110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.494128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.505940] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.506222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.506241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.518037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.518326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.518345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.530172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.530491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.530510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.542266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.542627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
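The repeated data_crc32_calc_done errors followed by COMMAND TRANSIENT TRANSPORT ERROR completions above are the expected effect of the test's injected CRC32C corruption, not a real transport fault. A minimal sketch of the RPC sequence that produces this behaviour, mirroring the bperf setup traced further down for the follow-up run (socket /var/tmp/bperf.sock, target 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, all values as they appear in this log):

#!/usr/bin/env bash
# Sketch only: configure a bdevperf session so that data digest failures are
# recorded as NVMe error statistics, then corrupt CRC32C results in the accel
# layer so the target's data digest checks fail. Options are taken verbatim
# from the trace in this log; semantics of -i 32 are as used by the harness.
set -euo pipefail

rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Record per-bdev NVMe error counters; retry count as traced in this run.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe-oF TCP controller with data digest (--ddgst) enabled.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Inject corruption into CRC32C operations (interval argument as in the trace),
# so computed data digests mismatch and writes complete with transient errors.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32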
00:30:03.133 [2024-07-15 15:35:12.554404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.554717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.554736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.566498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.566786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.566804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.578649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.578943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.578965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.590761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.591072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.591091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.602880] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.603208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.603227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.615015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.615318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.615336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.627137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.627423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.627441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.639246] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.639520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.639538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.651361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.651696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.651714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.663463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.663812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.663830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.675605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.675888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.675908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.687690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.688005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.688025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.699859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.700177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.711945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.712249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.712269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.724106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.724405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.724424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.736171] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.736465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.736490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.133 [2024-07-15 15:35:12.748325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.133 [2024-07-15 15:35:12.748622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.133 [2024-07-15 15:35:12.748641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.760415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.760696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.760715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.772566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.772871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.772893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.784664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.784970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.784989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.796787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.797081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.797100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.808874] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.809157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.809176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.820988] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.393 [2024-07-15 15:35:12.821295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.393 [2024-07-15 15:35:12.821314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.393 [2024-07-15 15:35:12.833106] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.833389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.833408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.845227] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.845547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.845566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.857330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.857630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.857648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.869441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.869743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.869762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.881541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.881845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.881864] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.893664] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.893954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.893973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.905763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.906090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.906109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.917878] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.918193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.918212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.929993] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.930289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.930307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:03.394 [2024-07-15 15:35:12.942108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a9a7e0) with pdu=0x2000190ec840 00:30:03.394 [2024-07-15 15:35:12.942390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:03.394 [2024-07-15 15:35:12.942410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:30:03.394
00:30:03.394 Latency(us)
00:30:03.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:03.394 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:03.394 nvme0n1 : 2.01 21073.22 82.32 0.00 0.00 6061.24 2880.85 14854.83
00:30:03.394 ===================================================================================================================
00:30:03.394 Total : 21073.22 82.32 0.00 0.00 6061.24 2880.85 14854.83
00:30:03.394 0
00:30:03.394 15:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:03.394 15:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:03.394 15:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:03.394 | .driver_specific 00:30:03.394 |
.nvme_error 00:30:03.394 | .status_code 00:30:03.394 | .command_transient_transport_error' 00:30:03.394 15:35:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 165 > 0 )) 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 887286 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 887286 ']' 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 887286 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 887286 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 887286' 00:30:03.653 killing process with pid 887286 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 887286 00:30:03.653 Received shutdown signal, test time was about 2.000000 seconds 00:30:03.653 00:30:03.653 Latency(us) 00:30:03.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:03.653 =================================================================================================================== 00:30:03.653 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:03.653 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 887286 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=887977 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 887977 /var/tmp/bperf.sock 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 887977 ']' 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:03.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:03.913 15:35:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:03.913 [2024-07-15 15:35:13.374426] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:03.913 [2024-07-15 15:35:13.374497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887977 ] 00:30:03.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:03.913 Zero copy mechanism will not be used. 00:30:03.913 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.913 [2024-07-15 15:35:13.438440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:03.913 [2024-07-15 15:35:13.501735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:04.850 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:05.110 nvme0n1 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:05.110 15:35:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:05.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:05.110 Zero copy mechanism will not be used. 00:30:05.110 Running I/O for 2 seconds... 00:30:05.369 [2024-07-15 15:35:14.745920] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.746295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.746328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.755324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.755710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.755734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.761954] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.762307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.762328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.770036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.770430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.770451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.779693] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.780103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.780123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.788113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 15:35:14.788463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.369 [2024-07-15 15:35:14.788484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.369 [2024-07-15 15:35:14.797085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.369 [2024-07-15 
15:35:14.797452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.797473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.804243] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.804624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.804645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.812160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.812662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.812682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.819421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.819799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.819819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.825828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.826200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.826220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.833336] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.833701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.833721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.841391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.841770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.841790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.848633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with 
pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.848996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.849016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.856649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.857023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.857043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.862984] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.863252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.863273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.871607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.871980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.872000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.881491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.881899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.881919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.888517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.888877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.888903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.895027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.895285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.895305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.900474] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.900731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.900750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.908203] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.908274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.908292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.915322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.915698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.915722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.923989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.924368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.924388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.931587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.931958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.931978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.939047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.939433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.939453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.945744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.946111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.946132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.953224] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.953609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.953629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.962160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.962543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.962563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.970168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.970545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.970565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.978182] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.978538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.978558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.370 [2024-07-15 15:35:14.987283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.370 [2024-07-15 15:35:14.987652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.370 [2024-07-15 15:35:14.987673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:14.994476] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:14.994744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:14.994764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.003938] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.004195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.004214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:05.630 [2024-07-15 15:35:15.009256] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.009633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.009653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.016608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.016977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.016997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.024807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.025196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.025216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.032083] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.032460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.032480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.038786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.039175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.039195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.045303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.045674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.045694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.051462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.051719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.051738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.057156] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.057411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.057431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.067030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.067408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.067428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.074756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.075141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.075162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.082690] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.082979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.082998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.089700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.089961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.089981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.097517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.097891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.097911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.108665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.109041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.109062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.115995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.116374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.116398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.124919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.125316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.125336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.131614] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.131868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.131893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.138096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.138353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.138373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.146002] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.146546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.146567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.154090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.154356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.154375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.164204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.164582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.630 [2024-07-15 15:35:15.164602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.630 [2024-07-15 15:35:15.172802] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.630 [2024-07-15 15:35:15.173198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.173219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.184556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.184937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.184957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.196519] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.196889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.196909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.209405] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.209689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.209709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.222624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.223164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.223185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.235153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.235542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 [2024-07-15 15:35:15.235562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.631 [2024-07-15 15:35:15.242792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.631 [2024-07-15 15:35:15.242857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.631 
[2024-07-15 15:35:15.242875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.252864] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.253248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.253269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.261101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.261470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.261490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.270913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.271391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.271411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.280245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.280313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.280334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.288908] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.289305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.289326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.298154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.298434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.298454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.308214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.308509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.308529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.317669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.891 [2024-07-15 15:35:15.318036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.891 [2024-07-15 15:35:15.318057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.891 [2024-07-15 15:35:15.327707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.328077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.328098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.335766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.335837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.335855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.345736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.346113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.346134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.354898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.355371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.355391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.364774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.365134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.365154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.374795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.375201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.375221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.385480] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.385849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.385869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.395330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.395406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.395424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.406807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.407198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.407218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.418020] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.418398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.418418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.427515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.427892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.427912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.437023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.437381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.437401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.446989] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.447367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.447388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.455568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.455953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.455973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.465685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.466083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.475236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.475617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.475638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.485070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.485435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.485455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.493392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.493662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.493681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:05.892 [2024-07-15 15:35:15.503449] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:05.892 [2024-07-15 15:35:15.503835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.892 [2024-07-15 15:35:15.503855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.153 [2024-07-15 15:35:15.512484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.153 
[2024-07-15 15:35:15.512847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.153 [2024-07-15 15:35:15.512867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.153 [2024-07-15 15:35:15.522498] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.153 [2024-07-15 15:35:15.522875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.153 [2024-07-15 15:35:15.522901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.153 [2024-07-15 15:35:15.531301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.153 [2024-07-15 15:35:15.531688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.153 [2024-07-15 15:35:15.531712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.153 [2024-07-15 15:35:15.540123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.153 [2024-07-15 15:35:15.540484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.153 [2024-07-15 15:35:15.540504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.153 [2024-07-15 15:35:15.549351] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.549744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.549764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.558428] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.558796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.558816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.566956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.567349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.567370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.576814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.576889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.576907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.585473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.585748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.590930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.591170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.591189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.596103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.596464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.596485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.602090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.602443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.602463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.608604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.608958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.608979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.615381] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.615733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.615753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.620240] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.620590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.626386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.626626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.626646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.633415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.633760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.633779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.640516] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.640758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.640786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.646912] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.647267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.647286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.652707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.652937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.652956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.659423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.659708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.659729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:30:06.154 [2024-07-15 15:35:15.669457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.669726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.679563] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.679963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.679983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.691413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.691753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.691773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.702718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.703072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.714142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.714520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.714540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.725250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.725653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.725672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.734001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.734342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.734362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.740154] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.740404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.746661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.746895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.746914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.752902] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.753255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.753274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.759774] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.760107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.760127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:06.154 [2024-07-15 15:35:15.767347] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.154 [2024-07-15 15:35:15.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.154 [2024-07-15 15:35:15.767596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:06.465 [2024-07-15 15:35:15.774810] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.465 [2024-07-15 15:35:15.775227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.466 [2024-07-15 15:35:15.775249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:06.466 [2024-07-15 15:35:15.782923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:06.466 [2024-07-15 15:35:15.783274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:06.466 [2024-07-15 15:35:15.783294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:06.466 [2024-07-15 15:35:15.790924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90
00:30:06.466 [2024-07-15 15:35:15.791212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:06.466 [2024-07-15 15:35:15.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:30:06.466 [2024-07-15 15:35:15.796251] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90
00:30:06.466 [2024-07-15 15:35:15.796613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:06.466 [2024-07-15 15:35:15.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three entries, a data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90, the failed WRITE (sqid:1 cid:15, len:32) command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0, repeat for further LBAs through 2024-07-15 15:35:16.175741 ...]
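The tcp.c:2081:data_crc32_calc_done errors above are the NVMe/TCP data digest (DDGST) check failing: the CRC32C computed over a DATA PDU payload does not match the digest carried with the PDU, so each affected WRITE is completed back on qid:1 with a transient transport error. The sketch below shows the kind of CRC32C comparison involved; it is a self-contained illustration, and the crc32c/ddgst_check names and the 512-byte sample buffer are assumptions made for the example rather than SPDK's own helpers.

/* Illustrative only: a self-contained CRC32C (Castagnoli) check of the kind the
 * NVMe/TCP data digest (DDGST) relies on. The function names and the sample
 * buffer are made up for this example; they are not SPDK's internal API. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CRC32C_POLY_REFLECTED 0x82F63B78u  /* bit-reversed 0x1EDC6F41 */

/* Bitwise, reflected CRC32C with init and final XOR of 0xFFFFFFFF. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++) {
            crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY_REFLECTED : 0);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest carried with the PDU matches the payload. */
static int ddgst_check(const void *payload, size_t len, uint32_t ddgst_from_pdu)
{
    return crc32c(payload, len) == ddgst_from_pdu ? 0 : -1;
}

int main(void)
{
    uint8_t block[512];
    memset(block, 0xA5, sizeof(block));

    uint32_t good = crc32c(block, sizeof(block));
    block[0] ^= 0x01;                       /* corrupt one bit of the payload */

    printf("digest check after corruption: %s\n",
           ddgst_check(block, sizeof(block), good) ? "Data digest error" : "ok");
    return 0;
}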
[... the same pattern continues from 2024-07-15 15:35:16.179845 onward on the same queue pair, one WRITE at a time, each failed with (00/22) ...]
00:30:07.264 [2024-07-15 15:35:16.670090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90
00:30:07.264 [2024-07-15 15:35:16.670470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:07.264 [2024-07-15 15:35:16.670491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:07.264 [2024-07-15 15:35:16.676533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.676767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.676787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.682862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.683090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.683109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.688443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.688668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.688687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.694280] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.694629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.694649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.701130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.701521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.710399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.710723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.710743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.717625] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.717840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.717859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.724408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.724774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.724794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.729982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.730328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.730347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:07.264 [2024-07-15 15:35:16.737013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b8fc90) with pdu=0x2000190fef90 00:30:07.264 [2024-07-15 15:35:16.737249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:07.264 [2024-07-15 15:35:16.737269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:07.264 00:30:07.264 Latency(us) 00:30:07.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.264 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:07.264 nvme0n1 : 2.00 4210.60 526.32 0.00 0.00 3792.36 1788.59 13489.49 00:30:07.264 =================================================================================================================== 00:30:07.264 Total : 4210.60 526.32 0.00 0.00 3792.36 1788.59 13489.49 00:30:07.264 0 00:30:07.264 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:07.264 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:07.264 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:07.264 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:07.264 | .driver_specific 00:30:07.264 | .nvme_error 00:30:07.264 | .status_code 00:30:07.264 | .command_transient_transport_error' 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 272 > 0 )) 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 887977 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 887977 ']' 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 887977 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers 
-o comm= 887977 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 887977' 00:30:07.525 killing process with pid 887977 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 887977 00:30:07.525 Received shutdown signal, test time was about 2.000000 seconds 00:30:07.525 00:30:07.525 Latency(us) 00:30:07.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.525 =================================================================================================================== 00:30:07.525 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:07.525 15:35:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 887977 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 885577 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 885577 ']' 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 885577 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:07.526 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885577 00:30:07.786 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:07.786 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885577' 00:30:07.787 killing process with pid 885577 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 885577 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 885577 00:30:07.787 00:30:07.787 real 0m16.197s 00:30:07.787 user 0m31.718s 00:30:07.787 sys 0m3.373s 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:07.787 ************************************ 00:30:07.787 END TEST nvmf_digest_error 00:30:07.787 ************************************ 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.787 15:35:17 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.787 rmmod nvme_tcp 00:30:07.787 rmmod nvme_fabrics 00:30:07.787 rmmod nvme_keyring 00:30:07.787 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 885577 ']' 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 885577 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 885577 ']' 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 885577 00:30:08.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (885577) - No such process 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 885577 is not found' 00:30:08.046 Process with pid 885577 is not found 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.046 15:35:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.958 15:35:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.958 00:30:09.958 real 0m42.577s 00:30:09.958 user 1m6.138s 00:30:09.958 sys 0m12.402s 00:30:09.958 15:35:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:09.958 15:35:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:09.958 ************************************ 00:30:09.958 END TEST nvmf_digest 00:30:09.958 ************************************ 00:30:09.958 15:35:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:09.958 15:35:19 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:30:09.958 15:35:19 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:30:09.958 15:35:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:30:09.958 15:35:19 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:09.958 15:35:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:09.958 15:35:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.958 15:35:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:09.958 ************************************ 00:30:09.958 START TEST nvmf_bdevperf 00:30:09.958 ************************************ 00:30:09.958 15:35:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:10.219 * Looking for test storage... 
00:30:10.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.219 15:35:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:10.220 15:35:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.364 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:18.365 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:18.365 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:18.365 Found net devices under 0000:31:00.0: cvl_0_0 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:18.365 Found net devices under 0000:31:00.1: cvl_0_1 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:30:18.365 00:30:18.365 --- 10.0.0.2 ping statistics --- 00:30:18.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.365 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:30:18.365 00:30:18.365 --- 10.0.0.1 ping statistics --- 00:30:18.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.365 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=893214 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 893214 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 893214 ']' 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.365 15:35:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.365 [2024-07-15 15:35:27.438331] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:18.365 [2024-07-15 15:35:27.438423] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.365 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.365 [2024-07-15 15:35:27.518601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:18.365 [2024-07-15 15:35:27.591890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:18.366 [2024-07-15 15:35:27.591929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.366 [2024-07-15 15:35:27.591937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.366 [2024-07-15 15:35:27.591943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.366 [2024-07-15 15:35:27.591949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.366 [2024-07-15 15:35:27.592087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.366 [2024-07-15 15:35:27.592332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.366 [2024-07-15 15:35:27.592332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.658 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.658 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:18.658 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.658 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.658 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 [2024-07-15 15:35:28.291847] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 Malloc0 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:18.918 [2024-07-15 15:35:28.357251] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:18.918 { 00:30:18.918 "params": { 00:30:18.918 "name": "Nvme$subsystem", 00:30:18.918 "trtype": "$TEST_TRANSPORT", 00:30:18.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.918 "adrfam": "ipv4", 00:30:18.918 "trsvcid": "$NVMF_PORT", 00:30:18.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.918 "hdgst": ${hdgst:-false}, 00:30:18.918 "ddgst": ${ddgst:-false} 00:30:18.918 }, 00:30:18.918 "method": "bdev_nvme_attach_controller" 00:30:18.918 } 00:30:18.918 EOF 00:30:18.918 )") 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:18.918 15:35:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:18.918 "params": { 00:30:18.918 "name": "Nvme1", 00:30:18.918 "trtype": "tcp", 00:30:18.918 "traddr": "10.0.0.2", 00:30:18.918 "adrfam": "ipv4", 00:30:18.918 "trsvcid": "4420", 00:30:18.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:18.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:18.918 "hdgst": false, 00:30:18.918 "ddgst": false 00:30:18.918 }, 00:30:18.918 "method": "bdev_nvme_attach_controller" 00:30:18.918 }' 00:30:18.918 [2024-07-15 15:35:28.410851] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:18.919 [2024-07-15 15:35:28.410910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893419 ] 00:30:18.919 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.919 [2024-07-15 15:35:28.474036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.179 [2024-07-15 15:35:28.538494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.179 Running I/O for 1 seconds... 
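For reference, the target-side setup that the harness drives through rpc_cmd in the trace above reduces to the following RPC sequence. This is a minimal sketch rather than the harness itself: it assumes an nvmf_tgt is already running (in this run it is started inside the cvl_0_0_ns_spdk namespace) and that scripts/rpc.py reaches it on the default /var/tmp/spdk.sock socket; the transport options, Malloc bdev geometry, subsystem NQN and listener address/port are copied from the trace.

# sketch only -- same arguments as the rpc_cmd calls traced above
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up on 10.0.0.2:4420, the bdevperf runs that follow connect to nqn.2016-06.io.spdk:cnode1 over TCP and drive the verify workload against it.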
00:30:20.117 00:30:20.117 Latency(us) 00:30:20.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.117 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:20.117 Verification LBA range: start 0x0 length 0x4000 00:30:20.117 Nvme1n1 : 1.01 8990.70 35.12 0.00 0.00 14175.59 2785.28 14199.47 00:30:20.117 =================================================================================================================== 00:30:20.117 Total : 8990.70 35.12 0.00 0.00 14175.59 2785.28 14199.47 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=893616 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:20.377 { 00:30:20.377 "params": { 00:30:20.377 "name": "Nvme$subsystem", 00:30:20.377 "trtype": "$TEST_TRANSPORT", 00:30:20.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:20.377 "adrfam": "ipv4", 00:30:20.377 "trsvcid": "$NVMF_PORT", 00:30:20.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:20.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:20.377 "hdgst": ${hdgst:-false}, 00:30:20.377 "ddgst": ${ddgst:-false} 00:30:20.377 }, 00:30:20.377 "method": "bdev_nvme_attach_controller" 00:30:20.377 } 00:30:20.377 EOF 00:30:20.377 )") 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:30:20.377 15:35:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:20.377 "params": { 00:30:20.377 "name": "Nvme1", 00:30:20.377 "trtype": "tcp", 00:30:20.377 "traddr": "10.0.0.2", 00:30:20.377 "adrfam": "ipv4", 00:30:20.377 "trsvcid": "4420", 00:30:20.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:20.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:20.377 "hdgst": false, 00:30:20.377 "ddgst": false 00:30:20.377 }, 00:30:20.377 "method": "bdev_nvme_attach_controller" 00:30:20.377 }' 00:30:20.377 [2024-07-15 15:35:29.875288] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:20.377 [2024-07-15 15:35:29.875345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893616 ] 00:30:20.377 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.377 [2024-07-15 15:35:29.937163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.637 [2024-07-15 15:35:30.000887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.637 Running I/O for 15 seconds... 
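Both bdevperf invocations above take their controller definition as a JSON config fed through a file descriptor (gen_nvmf_target_json piped into --json /dev/fd/62 and /dev/fd/63); the trace only prints the bdev_nvme_attach_controller entry. A standalone reproduction could look like the sketch below. The params block is copied from the trace; the surrounding "subsystems"/"config" wrapper follows the usual SPDK JSON config layout and is an assumption here, since the wrapper emitted by gen_nvmf_target_json is not echoed in the trace, and the harness's extra -f flag is omitted.

# sketch only -- rough equivalent of the 15-second run launched just above,
# run from the root of an SPDK build tree
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15

The kill -9 of the target (pid 893214) below is issued while this 15-second run is still in flight, which is what produces the ABORTED - SQ DELETION completions that follow.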
00:30:23.958 15:35:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 893214 00:30:23.958 15:35:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:23.958 [2024-07-15 15:35:32.840561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.840979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.840990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.841003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.841013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.841024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.841035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.841046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.841056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.841070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.958 [2024-07-15 15:35:32.841080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.958 [2024-07-15 15:35:32.841090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 
[2024-07-15 15:35:32.841531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841691] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.841992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.959 [2024-07-15 15:35:32.841999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:89 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60504 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.959 [2024-07-15 15:35:32.842340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.959 [2024-07-15 15:35:32.842349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 
15:35:32.842436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:23.960 [2024-07-15 15:35:32.842484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:23.960 [2024-07-15 15:35:32.842860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2270750 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.842877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:23.960 [2024-07-15 15:35:32.842882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:23.960 [2024-07-15 15:35:32.842892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60368 len:8 PRP1 0x0 PRP2 0x0 00:30:23.960 [2024-07-15 15:35:32.842900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:23.960 [2024-07-15 15:35:32.842938] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2270750 was disconnected and freed. reset controller. 
00:30:23.960 [2024-07-15 15:35:32.846509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.846557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.847428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.847463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.847474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.847711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.847944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.847954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.847962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.851463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.960 [2024-07-15 15:35:32.860539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.861218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.861255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.861265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.861501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.861721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.861729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.861737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.865244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.960 [2024-07-15 15:35:32.874300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.874969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.875005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.875017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.875257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.875476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.875485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.875492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.878995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.960 [2024-07-15 15:35:32.888064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.888674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.888710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.888720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.888964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.889184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.889193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.889200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.892699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.960 [2024-07-15 15:35:32.901963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.902585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.902621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.902632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.902868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.903096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.903105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.903112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.906606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.960 [2024-07-15 15:35:32.915866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.916523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.916560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.916570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.916806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.917033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.917043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.917050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.920545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.960 [2024-07-15 15:35:32.929603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.930272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.930309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.930320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.930556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.930775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.930783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.930790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.934295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.960 [2024-07-15 15:35:32.943363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.944005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.944042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.944057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.944294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.944513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.944522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.944529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.948033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.960 [2024-07-15 15:35:32.957094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.957731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.957767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.957777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.960 [2024-07-15 15:35:32.958022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.960 [2024-07-15 15:35:32.958242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.960 [2024-07-15 15:35:32.958250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.960 [2024-07-15 15:35:32.958257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.960 [2024-07-15 15:35:32.961754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.960 [2024-07-15 15:35:32.971025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.960 [2024-07-15 15:35:32.971661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-07-15 15:35:32.971697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.960 [2024-07-15 15:35:32.971707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:32.971951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:32.972172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:32.972180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:32.972187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:32.975681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:32.984955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:32.985627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:32.985664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:32.985674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:32.985918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:32.986142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:32.986151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:32.986158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:32.989651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:32.998708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:32.999262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:32.999280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:32.999287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:32.999504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:32.999719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:32.999727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:32.999734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.003260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.012523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.013204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.013240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.013250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.013486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.013705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.013713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.013721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.017225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.026285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.026964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.027001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.027013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.027250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.027469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.027477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.027485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.030990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.040052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.040673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.040710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.040720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.040965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.041185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.041193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.041200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.044692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.053956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.054607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.054644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.054655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.054899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.055120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.055129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.055136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.058634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.067697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.068376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.068412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.068422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.068659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.068878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.068894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.068902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.072402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.081471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.082171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.082208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.082222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.082459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.082678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.082686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.082693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.086198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.095253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.095924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.095961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.095973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.096213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.096432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.096440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.096447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.099948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.109016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.109658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.109694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.109704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.109948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.110168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.110177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.110184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.113679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.122949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.123495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.123513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.123521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.123737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.123958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.123972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.123979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.127468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.136944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.137484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.137501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.137508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.137724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.137945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.137953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.137959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.141450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.150708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.151388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.151424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.151435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.151671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.151901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.151910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.151917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.155411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.164471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.165184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.165221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.165233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.165472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.165691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.165699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.165706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.169207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.961 [2024-07-15 15:35:33.178271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.178903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.178939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.178950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.179185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.961 [2024-07-15 15:35:33.179404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.961 [2024-07-15 15:35:33.179412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.961 [2024-07-15 15:35:33.179420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.961 [2024-07-15 15:35:33.182930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.961 [2024-07-15 15:35:33.192136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.961 [2024-07-15 15:35:33.192813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-07-15 15:35:33.192849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.961 [2024-07-15 15:35:33.192859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.961 [2024-07-15 15:35:33.193103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.193323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.193331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.193338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.196835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.205900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.206454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.206472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.206480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.206696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.206917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.206926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.206933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.210451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.219722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.220262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.220298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.220310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.220554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.220774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.220782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.220789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.224294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.233564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.234639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.234669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.234679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.234922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.235143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.235151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.235158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.238656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.247306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.247873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.247895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.247903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.248121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.248337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.248345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.248352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.251843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.261110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.261563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.261578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.261586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.261802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.262024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.262034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.262046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.265535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.275036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.275587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.275602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.275609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.275825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.276047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.276055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.276062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.279550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.288824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.289495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.289531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.289542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.289778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.290004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.290013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.290020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.293513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.302581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.303158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.303178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.303185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.303401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.303617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.303625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.303631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.307126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.316392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.316941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.316961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.316969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.317185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.317400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.317408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.317415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.320907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.330170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.330737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.330751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.330758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.330980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.331195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.331204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.331210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.334699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.343971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.344610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.344647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.344657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.344902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.345122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.345130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.345137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.348630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.357903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.358367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.358386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.358393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.358610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.358830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.358838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.358844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.362341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.371810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.372470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.372506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.372516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.372752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.372979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.372988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.372996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.376490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.385575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.386148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.386167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.386175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.386391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.386606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.386614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.386621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.390114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.399377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.400108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.400144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.400155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.400391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.400610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.400618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.400625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.404133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.413195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.413781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.413799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.413807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.414028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.414244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.414252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.414259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.417778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.962 [2024-07-15 15:35:33.427050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.427492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.427508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.427515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.427731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.427951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.427960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.427966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.431455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.962 [2024-07-15 15:35:33.440978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.962 [2024-07-15 15:35:33.441507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-07-15 15:35:33.441523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.962 [2024-07-15 15:35:33.441530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.962 [2024-07-15 15:35:33.441745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.962 [2024-07-15 15:35:33.441966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.962 [2024-07-15 15:35:33.441975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.962 [2024-07-15 15:35:33.441982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.962 [2024-07-15 15:35:33.445472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.963 [2024-07-15 15:35:33.454734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.455270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.455285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.455296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.455511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.455727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.455735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.455741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.459233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.963 [2024-07-15 15:35:33.468496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.469038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.469054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.469061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.469276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.469491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.469499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.469505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.472998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.963 [2024-07-15 15:35:33.482271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.482947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.482984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.482995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.483234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.483452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.483461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.483468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.486973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.963 [2024-07-15 15:35:33.496037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.496627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.496646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.496654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.496870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.497093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.497106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.497114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.500603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.963 [2024-07-15 15:35:33.509908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.510407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.510442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.510453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.510689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.510916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.510925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.510933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.514428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.963 [2024-07-15 15:35:33.523693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.524384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.524421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.524431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.524667] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.524894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.524903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.524910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.528405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.963 [2024-07-15 15:35:33.537470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.537964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.537983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.537990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.538207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.538422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.538430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.538437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.541930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:23.963 [2024-07-15 15:35:33.551403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.552097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.552133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.552144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.552379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.552598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.552607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.552614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.556113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:23.963 [2024-07-15 15:35:33.565177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:23.963 [2024-07-15 15:35:33.565854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-07-15 15:35:33.565897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:23.963 [2024-07-15 15:35:33.565910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:23.963 [2024-07-15 15:35:33.566149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:23.963 [2024-07-15 15:35:33.566368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:23.963 [2024-07-15 15:35:33.566377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:23.963 [2024-07-15 15:35:33.566384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:23.963 [2024-07-15 15:35:33.569878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.578940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.579467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.579485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.579493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.579709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.579930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.579940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.579947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.583448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.592711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.593271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.593286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.593298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.593513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.593728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.593736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.593742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.597237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.606496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.607037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.607053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.607060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.607276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.607492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.607500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.607507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.611000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.620266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.620800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.620814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.620821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.621041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.621256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.621264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.621271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.624784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.634061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.634606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.634623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.634630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.634845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.635066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.635079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.635086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.638576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.647836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.648415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.648430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.648437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.648652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.648867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.648875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.648882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.652376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.661641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.663031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.663063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.663073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.663309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.663529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.663537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.663544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.667046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.675486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.676040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.676058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.676066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.676282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.676498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.676506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.676513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.680023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.689302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.689770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.689787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.689794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.690015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.690233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.690241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.690248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.693738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.703208] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.703780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.703795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.703802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.704023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.704239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.704247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.704254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.707739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.717000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.717455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.717470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.717477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.717693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.717912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.717921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.717927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.721415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.730881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.731404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.731418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.731425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.731644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.731859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.731867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.731874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.735366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.225 [2024-07-15 15:35:33.744630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.745107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.745122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.745129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.745345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.745560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.225 [2024-07-15 15:35:33.745567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.225 [2024-07-15 15:35:33.745574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.225 [2024-07-15 15:35:33.749065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.225 [2024-07-15 15:35:33.758531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.225 [2024-07-15 15:35:33.759204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.225 [2024-07-15 15:35:33.759241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.225 [2024-07-15 15:35:33.759253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.225 [2024-07-15 15:35:33.759492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.225 [2024-07-15 15:35:33.759712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.759720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.759727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.763230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.226 [2024-07-15 15:35:33.772292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.772881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.772904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.772911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.773127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.773343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.773352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.773362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.776852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.226 [2024-07-15 15:35:33.786128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.786704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.786720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.786727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.786948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.787164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.787172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.787179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.790665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.226 [2024-07-15 15:35:33.799939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.800584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.800620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.800630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.800866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.801094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.801103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.801110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.804606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.226 [2024-07-15 15:35:33.813670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.814269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.814288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.814295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.814512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.814727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.814735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.814742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.818235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.226 [2024-07-15 15:35:33.827498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.828023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.828044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.828051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.828267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.828482] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.828490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.828497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.226 [2024-07-15 15:35:33.832016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.226 [2024-07-15 15:35:33.841290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.226 [2024-07-15 15:35:33.841817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.226 [2024-07-15 15:35:33.841833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.226 [2024-07-15 15:35:33.841840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.226 [2024-07-15 15:35:33.842061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.226 [2024-07-15 15:35:33.842277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.226 [2024-07-15 15:35:33.842285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.226 [2024-07-15 15:35:33.842292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.845785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.494 [2024-07-15 15:35:33.855063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.855594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.855631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.855643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.855880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.856108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.856117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.856124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.859624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.494 [2024-07-15 15:35:33.868911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.869374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.869392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.869400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.869616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.869837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.869844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.869851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.873428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.494 [2024-07-15 15:35:33.882722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.883222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.883239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.883246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.883462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.883677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.883685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.883692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.887192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.494 [2024-07-15 15:35:33.896460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.896998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.897015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.897022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.897238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.897453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.897460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.897467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.900961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.494 [2024-07-15 15:35:33.910227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.910893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.910929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.910940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.911180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.911398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.911407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.911414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.914919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.494 [2024-07-15 15:35:33.923984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.924639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.924675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.924686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.924932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.925152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.925160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.925167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.928660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.494 [2024-07-15 15:35:33.937713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.938395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.938432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.938442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.938678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.938906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.938915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.938922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.942414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.494 [2024-07-15 15:35:33.951466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.952156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.952193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.952203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.952439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.952658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.494 [2024-07-15 15:35:33.952667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.494 [2024-07-15 15:35:33.952674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.494 [2024-07-15 15:35:33.956173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.494 [2024-07-15 15:35:33.965229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.494 [2024-07-15 15:35:33.965902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.494 [2024-07-15 15:35:33.965938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.494 [2024-07-15 15:35:33.965952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.494 [2024-07-15 15:35:33.966188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.494 [2024-07-15 15:35:33.966407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:33.966415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:33.966423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:33.969921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.495 [2024-07-15 15:35:33.978973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:33.979605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:33.979642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:33.979652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:33.979897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:33.980117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:33.980125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:33.980132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:33.983632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.495 [2024-07-15 15:35:33.992895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:33.993507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:33.993544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:33.993554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:33.993790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:33.994019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:33.994028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:33.994035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:33.997529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.495 [2024-07-15 15:35:34.006792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.007435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.007472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.007482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.007718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.007946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.007959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.007967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.011463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.495 [2024-07-15 15:35:34.020725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.021293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.021310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.021318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.021534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.021749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.021757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.021764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.025256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.495 [2024-07-15 15:35:34.034511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.035047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.035063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.035070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.035286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.035501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.035508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.035515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.039031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.495 [2024-07-15 15:35:34.048295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.048869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.048888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.048896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.049112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.049327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.049335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.049341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.052831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.495 [2024-07-15 15:35:34.062091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.062734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.062771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.062781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.063026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.063246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.063254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.063261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.066749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.495 [2024-07-15 15:35:34.076029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.076696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.076732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.076742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.076988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.077208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.077217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.077224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.080718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.495 [2024-07-15 15:35:34.089789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.090453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.090490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.090500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.090736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.090965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.090974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.090981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.094475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.495 [2024-07-15 15:35:34.103542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.495 [2024-07-15 15:35:34.104185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.495 [2024-07-15 15:35:34.104221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.495 [2024-07-15 15:35:34.104231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.495 [2024-07-15 15:35:34.104472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.495 [2024-07-15 15:35:34.104691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.495 [2024-07-15 15:35:34.104699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.495 [2024-07-15 15:35:34.104706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.495 [2024-07-15 15:35:34.108209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.755 [2024-07-15 15:35:34.117289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.755 [2024-07-15 15:35:34.117993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.755 [2024-07-15 15:35:34.118030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.755 [2024-07-15 15:35:34.118040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.755 [2024-07-15 15:35:34.118276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.755 [2024-07-15 15:35:34.118496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.755 [2024-07-15 15:35:34.118504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.755 [2024-07-15 15:35:34.118511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.755 [2024-07-15 15:35:34.122010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.755 [2024-07-15 15:35:34.131062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.755 [2024-07-15 15:35:34.131740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.755 [2024-07-15 15:35:34.131777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.755 [2024-07-15 15:35:34.131787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.755 [2024-07-15 15:35:34.132030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.755 [2024-07-15 15:35:34.132251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.755 [2024-07-15 15:35:34.132259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.755 [2024-07-15 15:35:34.132266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.755 [2024-07-15 15:35:34.135758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.755 [2024-07-15 15:35:34.144859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.755 [2024-07-15 15:35:34.145409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.755 [2024-07-15 15:35:34.145427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.755 [2024-07-15 15:35:34.145435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.755 [2024-07-15 15:35:34.145652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.145868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.145875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.145893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.149383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.756 [2024-07-15 15:35:34.158636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.159177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.159193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.159201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.159416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.159632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.159640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.159646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.163140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.756 [2024-07-15 15:35:34.172391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.173087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.173123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.173133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.173369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.173588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.173596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.173604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.177109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.756 [2024-07-15 15:35:34.186170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.186681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.186717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.186727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.186972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.187193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.187201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.187208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.190701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.756 [2024-07-15 15:35:34.199964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.200617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.200653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.200663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.200908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.201128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.201136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.201144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.204635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.756 [2024-07-15 15:35:34.213896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.214562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.214598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.214609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.214845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.215076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.215086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.215093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.218586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.756 [2024-07-15 15:35:34.227644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.228365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.228402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.228412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.228648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.228867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.228875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.228882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.232387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.756 [2024-07-15 15:35:34.241441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.242063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.242100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.242110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.242350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.242570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.242578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.242585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.246086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.756 [2024-07-15 15:35:34.255176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.255828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.255864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.255874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.256119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.256338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.256347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.256354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.756 [2024-07-15 15:35:34.259848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.756 [2024-07-15 15:35:34.269111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.756 [2024-07-15 15:35:34.269770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.756 [2024-07-15 15:35:34.269806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.756 [2024-07-15 15:35:34.269816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.756 [2024-07-15 15:35:34.270063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.756 [2024-07-15 15:35:34.270283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.756 [2024-07-15 15:35:34.270291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.756 [2024-07-15 15:35:34.270299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.273796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.757 [2024-07-15 15:35:34.282878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.283537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.283573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.283584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.283820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.284049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.284058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.284070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.287572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.757 [2024-07-15 15:35:34.296645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.297312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.297348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.297358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.297594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.297814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.297822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.297829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.301335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.757 [2024-07-15 15:35:34.310399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.310954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.310973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.310980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.311197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.311412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.311420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.311427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.314923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.757 [2024-07-15 15:35:34.324189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.324673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.324688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.324695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.324916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.325133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.325141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.325148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.328632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.757 [2024-07-15 15:35:34.338093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.338628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.338651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.338658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.338874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.339096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.339104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.339111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.342597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:24.757 [2024-07-15 15:35:34.351846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.352423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.352438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.352445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.352660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.352875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.352882] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.352894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.356378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:24.757 [2024-07-15 15:35:34.365629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:24.757 [2024-07-15 15:35:34.366258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.757 [2024-07-15 15:35:34.366295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:24.757 [2024-07-15 15:35:34.366305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:24.757 [2024-07-15 15:35:34.366541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:24.757 [2024-07-15 15:35:34.366760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:24.757 [2024-07-15 15:35:34.366768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:24.757 [2024-07-15 15:35:34.366776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:24.757 [2024-07-15 15:35:34.370276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.017 [2024-07-15 15:35:34.379547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.017 [2024-07-15 15:35:34.380215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.017 [2024-07-15 15:35:34.380251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.017 [2024-07-15 15:35:34.380261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.017 [2024-07-15 15:35:34.380497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.017 [2024-07-15 15:35:34.380720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.017 [2024-07-15 15:35:34.380729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.017 [2024-07-15 15:35:34.380736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.017 [2024-07-15 15:35:34.384246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.017 [2024-07-15 15:35:34.393298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.393961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.393998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.394009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.394247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.394466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.394474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.394481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.397982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.407035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.407678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.407715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.407725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.407969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.408189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.408197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.408204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.411698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.018 [2024-07-15 15:35:34.420970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.421623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.421659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.421669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.421914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.422134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.422142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.422149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.425652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.434708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.435283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.435301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.435309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.435526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.435741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.435749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.435756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.439265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.018 [2024-07-15 15:35:34.448516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.449179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.449215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.449226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.449461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.449681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.449689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.449696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.453196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.462279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.462902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.462921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.462929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.463145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.463360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.463368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.463375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.466863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.018 [2024-07-15 15:35:34.476118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.476786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.476822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.476836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.477082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.477302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.477310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.477317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.480810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.489877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.490532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.490568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.490578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.490814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.491043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.491053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.491060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.494553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.018 [2024-07-15 15:35:34.503611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.504276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.504312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.504322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.504558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.504777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.504785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.504793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.508295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.517354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.518065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.518101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.518112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.518347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.518566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.518578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.518586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.522089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.018 [2024-07-15 15:35:34.531148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.018 [2024-07-15 15:35:34.531816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.018 [2024-07-15 15:35:34.531853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.018 [2024-07-15 15:35:34.531864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.018 [2024-07-15 15:35:34.532113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.018 [2024-07-15 15:35:34.532333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.018 [2024-07-15 15:35:34.532341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.018 [2024-07-15 15:35:34.532348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.018 [2024-07-15 15:35:34.535843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.018 [2024-07-15 15:35:34.544901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.545575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.545611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.545621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.545857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.546086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.546095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.546102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.549592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.019 [2024-07-15 15:35:34.558646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.559302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.559339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.559349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.559585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.559803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.559812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.559819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.563322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.019 [2024-07-15 15:35:34.572382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.573021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.573057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.573067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.573303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.573522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.573530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.573538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.577039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.019 [2024-07-15 15:35:34.586309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.586923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.586959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.586971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.587207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.587426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.587434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.587441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.590943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.019 [2024-07-15 15:35:34.600201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.600596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.600615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.600623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.600840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.601064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.601073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.601080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.604568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.019 [2024-07-15 15:35:34.614029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.614642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.614679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.614689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.614938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.615166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.615174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.615181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.618674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.019 [2024-07-15 15:35:34.627955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.019 [2024-07-15 15:35:34.628531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.019 [2024-07-15 15:35:34.628549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.019 [2024-07-15 15:35:34.628556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.019 [2024-07-15 15:35:34.628772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.019 [2024-07-15 15:35:34.628993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.019 [2024-07-15 15:35:34.629002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.019 [2024-07-15 15:35:34.629009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.019 [2024-07-15 15:35:34.632507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.281 [2024-07-15 15:35:34.641773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.642339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.642354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.642362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.642577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.642792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.642800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.642806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.646302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.281 [2024-07-15 15:35:34.655552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.656178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.656214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.656225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.656461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.656680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.656689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.656701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.660205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.281 [2024-07-15 15:35:34.669291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.669889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.669907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.669915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.670131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.670346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.670354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.670361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.673851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.281 [2024-07-15 15:35:34.683116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.683659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.683695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.683705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.683950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.684170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.684178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.684185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.687676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.281 [2024-07-15 15:35:34.696950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.697600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.697637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.697647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.697892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.698112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.698120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.698128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.701626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.281 [2024-07-15 15:35:34.710700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.711361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.711397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.711408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.711644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.711863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.711872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.711879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.715385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.281 [2024-07-15 15:35:34.724453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.725029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.725066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.725077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.725317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.725537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.725545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.725552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.729054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.281 [2024-07-15 15:35:34.738320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.738987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.739023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.281 [2024-07-15 15:35:34.739035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.281 [2024-07-15 15:35:34.739274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.281 [2024-07-15 15:35:34.739493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.281 [2024-07-15 15:35:34.739502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.281 [2024-07-15 15:35:34.739509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.281 [2024-07-15 15:35:34.743011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.281 [2024-07-15 15:35:34.752070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.281 [2024-07-15 15:35:34.752609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.281 [2024-07-15 15:35:34.752644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.752655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.752906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.753130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.753139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.753146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.756640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.282 [2024-07-15 15:35:34.765918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.766589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.766626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.766636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.766873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.767100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.767109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.767116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.770612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.282 [2024-07-15 15:35:34.779671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.780171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.780208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.780218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.780454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.780673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.780681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.780688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.784200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.282 [2024-07-15 15:35:34.793466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.794134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.794170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.794180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.794416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.794635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.794643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.794651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.798160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.282 [2024-07-15 15:35:34.807213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.807860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.807903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.807914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.808150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.808369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.808377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.808384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.811877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.282 [2024-07-15 15:35:34.821137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.821793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.821830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.821841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.822089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.822309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.822317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.822325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.825816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.282 [2024-07-15 15:35:34.834877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.835553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.835589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.835600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.835835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.836064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.836073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.836080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.839577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.282 [2024-07-15 15:35:34.848632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.849319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.849360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.849371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.849607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.849828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.849836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.849844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.853349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.282 [2024-07-15 15:35:34.862425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.863175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.863212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.863222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.863458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.863678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.863686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.863693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.867194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.282 [2024-07-15 15:35:34.876277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.876900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.876937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.876949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.877188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.877407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.877415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.877423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.880919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.282 [2024-07-15 15:35:34.890194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.282 [2024-07-15 15:35:34.890783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.282 [2024-07-15 15:35:34.890801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.282 [2024-07-15 15:35:34.890808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.282 [2024-07-15 15:35:34.891032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.282 [2024-07-15 15:35:34.891253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.282 [2024-07-15 15:35:34.891261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.282 [2024-07-15 15:35:34.891268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.282 [2024-07-15 15:35:34.894759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.545 [2024-07-15 15:35:34.904038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.545 [2024-07-15 15:35:34.904659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.545 [2024-07-15 15:35:34.904695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.545 [2024-07-15 15:35:34.904705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.545 [2024-07-15 15:35:34.905013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.545 [2024-07-15 15:35:34.905234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.545 [2024-07-15 15:35:34.905242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.545 [2024-07-15 15:35:34.905249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.545 [2024-07-15 15:35:34.908750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.545 [2024-07-15 15:35:34.917823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.545 [2024-07-15 15:35:34.918371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.545 [2024-07-15 15:35:34.918389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.545 [2024-07-15 15:35:34.918397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.545 [2024-07-15 15:35:34.918613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.545 [2024-07-15 15:35:34.918829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.545 [2024-07-15 15:35:34.918837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.545 [2024-07-15 15:35:34.918844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.545 [2024-07-15 15:35:34.922345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.545 [2024-07-15 15:35:34.931619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.545 [2024-07-15 15:35:34.932152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.545 [2024-07-15 15:35:34.932168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.545 [2024-07-15 15:35:34.932175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.545 [2024-07-15 15:35:34.932390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.545 [2024-07-15 15:35:34.932606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.545 [2024-07-15 15:35:34.932613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.545 [2024-07-15 15:35:34.932620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.545 [2024-07-15 15:35:34.936121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.545 [2024-07-15 15:35:34.945408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.545 [2024-07-15 15:35:34.945956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.545 [2024-07-15 15:35:34.945972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.545 [2024-07-15 15:35:34.945979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.545 [2024-07-15 15:35:34.946194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.545 [2024-07-15 15:35:34.946410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.545 [2024-07-15 15:35:34.946418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.545 [2024-07-15 15:35:34.946425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.545 [2024-07-15 15:35:34.949925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.545 [2024-07-15 15:35:34.959206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.545 [2024-07-15 15:35:34.959736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.545 [2024-07-15 15:35:34.959751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.545 [2024-07-15 15:35:34.959758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:34.959979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:34.960196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:34.960204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:34.960211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:34.963704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.546 [2024-07-15 15:35:34.972988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:34.973593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:34.973629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:34.973639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:34.973875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:34.974104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:34.974113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:34.974120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:34.977615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.546 [2024-07-15 15:35:34.986901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:34.987460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:34.987479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:34.987490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:34.987707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:34.987930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:34.987938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:34.987945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:34.991439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.546 [2024-07-15 15:35:35.000708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.001361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.001398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.001408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.001644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.001863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.001871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.001879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.005384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.546 [2024-07-15 15:35:35.014457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.014983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.015019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.015030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.015265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.015486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.015494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.015501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.019007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.546 [2024-07-15 15:35:35.028296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.028947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.028985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.028996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.029234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.029453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.029469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.029476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.032997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.546 [2024-07-15 15:35:35.042070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.042747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.042784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.042794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.043039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.043259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.043267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.043274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.046766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.546 [2024-07-15 15:35:35.055831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.056513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.056550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.056560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.056796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.057023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.057032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.057040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.060535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.546 [2024-07-15 15:35:35.069593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.070072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.070090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.070098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.070314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.070530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.070537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.070544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.074039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.546 [2024-07-15 15:35:35.083546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.084087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.084104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.084111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.084327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.084542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.084550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.546 [2024-07-15 15:35:35.084556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.546 [2024-07-15 15:35:35.088049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.546 [2024-07-15 15:35:35.097311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.546 [2024-07-15 15:35:35.097868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.546 [2024-07-15 15:35:35.097888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.546 [2024-07-15 15:35:35.097895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.546 [2024-07-15 15:35:35.098111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.546 [2024-07-15 15:35:35.098326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.546 [2024-07-15 15:35:35.098334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.547 [2024-07-15 15:35:35.098341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.547 [2024-07-15 15:35:35.101830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.547 [2024-07-15 15:35:35.111096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.547 [2024-07-15 15:35:35.111574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.547 [2024-07-15 15:35:35.111589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.547 [2024-07-15 15:35:35.111596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.547 [2024-07-15 15:35:35.111811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.547 [2024-07-15 15:35:35.112033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.547 [2024-07-15 15:35:35.112047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.547 [2024-07-15 15:35:35.112054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.547 [2024-07-15 15:35:35.115540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.547 [2024-07-15 15:35:35.125010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.547 [2024-07-15 15:35:35.125582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.547 [2024-07-15 15:35:35.125597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.547 [2024-07-15 15:35:35.125604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.547 [2024-07-15 15:35:35.125823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.547 [2024-07-15 15:35:35.126043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.547 [2024-07-15 15:35:35.126053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.547 [2024-07-15 15:35:35.126060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.547 [2024-07-15 15:35:35.129549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.547 [2024-07-15 15:35:35.138999] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.547 [2024-07-15 15:35:35.139531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.547 [2024-07-15 15:35:35.139547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.547 [2024-07-15 15:35:35.139554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.547 [2024-07-15 15:35:35.139770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.547 [2024-07-15 15:35:35.139989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.547 [2024-07-15 15:35:35.139997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.547 [2024-07-15 15:35:35.140004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.547 [2024-07-15 15:35:35.143493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.547 [2024-07-15 15:35:35.152759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.547 [2024-07-15 15:35:35.153303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.547 [2024-07-15 15:35:35.153319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.547 [2024-07-15 15:35:35.153326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.547 [2024-07-15 15:35:35.153541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.547 [2024-07-15 15:35:35.153756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.547 [2024-07-15 15:35:35.153764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.547 [2024-07-15 15:35:35.153770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.547 [2024-07-15 15:35:35.157261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.809 [2024-07-15 15:35:35.166522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.167069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.167087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.167094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.167311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.167526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.167534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.167544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.171038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.810 [2024-07-15 15:35:35.180299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.180924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.180961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.180972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.181212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.181431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.181439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.181447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.184957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.810 [2024-07-15 15:35:35.194224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.194780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.194799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.194806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.195028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.195244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.195251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.195258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.198748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.810 [2024-07-15 15:35:35.208010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.208578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.208592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.208600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.208815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.209035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.209043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.209049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.212537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.810 [2024-07-15 15:35:35.221796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.222453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.222494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.222505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.222741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.222967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.222976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.222983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.226482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.810 [2024-07-15 15:35:35.235547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.236235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.236271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.236281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.236517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.236736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.236744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.236752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.240253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.810 [2024-07-15 15:35:35.249315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.249761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.249779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.249786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.250008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.250224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.250232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.250239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.253727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.810 [2024-07-15 15:35:35.263201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.263732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.263747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.263755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.263975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.264195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.264203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.264210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.267698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.810 [2024-07-15 15:35:35.276961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.277497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.277511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.277519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.277734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.277954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.277963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.277971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.281458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.810 [2024-07-15 15:35:35.290753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.291491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.291527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.291538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.291778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.292005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.292015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.292023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.810 [2024-07-15 15:35:35.295519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.810 [2024-07-15 15:35:35.304587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.810 [2024-07-15 15:35:35.305140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.810 [2024-07-15 15:35:35.305159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.810 [2024-07-15 15:35:35.305166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.810 [2024-07-15 15:35:35.305383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.810 [2024-07-15 15:35:35.305599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.810 [2024-07-15 15:35:35.305607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.810 [2024-07-15 15:35:35.305614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.309114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.811 [2024-07-15 15:35:35.318374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.319030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.319068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.319080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.319317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.319536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.319544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.319551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.323050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.811 [2024-07-15 15:35:35.332110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.332704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.332722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.332730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.332952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.333168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.333176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.333183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.336674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.811 [2024-07-15 15:35:35.345936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.346433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.346448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.346455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.346670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.346891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.346900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.346907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.350395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.811 [2024-07-15 15:35:35.359860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.360519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.360555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.360570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.360806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.361032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.361040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.361048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.364544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.811 [2024-07-15 15:35:35.373609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.374179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.374197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.374205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.374421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.374637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.374644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.374651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.378146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.811 [2024-07-15 15:35:35.387419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.387847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.387865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.387872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.388093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.388309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.388318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.388324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.391812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:25.811 [2024-07-15 15:35:35.401284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.401815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.401830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.401837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.402058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.402275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.402286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.402293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.405784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:25.811 [2024-07-15 15:35:35.415048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:25.811 [2024-07-15 15:35:35.415703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.811 [2024-07-15 15:35:35.415739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:25.811 [2024-07-15 15:35:35.415749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:25.811 [2024-07-15 15:35:35.415993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:25.811 [2024-07-15 15:35:35.416213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:25.811 [2024-07-15 15:35:35.416222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:25.811 [2024-07-15 15:35:35.416229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:25.811 [2024-07-15 15:35:35.419723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.072 [2024-07-15 15:35:35.428824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.072 [2024-07-15 15:35:35.429441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.072 [2024-07-15 15:35:35.429478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.072 [2024-07-15 15:35:35.429488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.072 [2024-07-15 15:35:35.429724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.072 [2024-07-15 15:35:35.429950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.429959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.429966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.433462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.073 [2024-07-15 15:35:35.442739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.443327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.443345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.443353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.443569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.443784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.443793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.443799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.447293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.073 [2024-07-15 15:35:35.456569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.457084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.457100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.457107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.457322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.457538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.457545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.457552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.461044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.073 [2024-07-15 15:35:35.470308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.470882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.470902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.470909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.471124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.471340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.471347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.471354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.474841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.073 [2024-07-15 15:35:35.484112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.484683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.484698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.484705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.484925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.485141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.485149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.485155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.488691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.073 [2024-07-15 15:35:35.497985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.498654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.498691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.498705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.498949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.499169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.499177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.499184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.502676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.073 [2024-07-15 15:35:35.511735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.512328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.512347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.512354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.512571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.512787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.512794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.512801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.516300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.073 [2024-07-15 15:35:35.525567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.526190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.526227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.526237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.526473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.526693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.526701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.526708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.530208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.073 [2024-07-15 15:35:35.539469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.540035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.540054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.540062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.540278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.540494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.540506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.540513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.544008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.073 [2024-07-15 15:35:35.553275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.553812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.553828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.553835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.554054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.554270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.073 [2024-07-15 15:35:35.554278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.073 [2024-07-15 15:35:35.554285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.073 [2024-07-15 15:35:35.557770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.073 [2024-07-15 15:35:35.567043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.073 [2024-07-15 15:35:35.567478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.073 [2024-07-15 15:35:35.567496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.073 [2024-07-15 15:35:35.567503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.073 [2024-07-15 15:35:35.567719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.073 [2024-07-15 15:35:35.567943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.567952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.567959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.571449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.074 [2024-07-15 15:35:35.580919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.581367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.581382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.581389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.581604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.581819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.581827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.581833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.585334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.074 [2024-07-15 15:35:35.594797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.595351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.595366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.595373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.595588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.595804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.595811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.595818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.599309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.074 [2024-07-15 15:35:35.608576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.609092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.609107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.609114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.609329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.609544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.609552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.609559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.613047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.074 [2024-07-15 15:35:35.622306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.622876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.622895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.622903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.623118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.623333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.623340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.623347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.626835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.074 [2024-07-15 15:35:35.636097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.636658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.636672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.636680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.636903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.637118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.637125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.637132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.640623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.074 [2024-07-15 15:35:35.649887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.650419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.650434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.650441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.650656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.650871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.650879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.650890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.654380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.074 [2024-07-15 15:35:35.663645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.664190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.664205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.664212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.664427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.664642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.664649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.664656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.668150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.074 [2024-07-15 15:35:35.677414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.074 [2024-07-15 15:35:35.678138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.074 [2024-07-15 15:35:35.678175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.074 [2024-07-15 15:35:35.678186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.074 [2024-07-15 15:35:35.678423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.074 [2024-07-15 15:35:35.678642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.074 [2024-07-15 15:35:35.678650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.074 [2024-07-15 15:35:35.678662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.074 [2024-07-15 15:35:35.682175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.074 [2024-07-15 15:35:35.691241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.691896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.691933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.691945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.692185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.692404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.692413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.692420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.695917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.337 [2024-07-15 15:35:35.705004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.705545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.705582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.705592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.705828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.706056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.706066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.706073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.709567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.337 [2024-07-15 15:35:35.718828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.719477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.719514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.719524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.719760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.719988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.719997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.720005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.723497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.337 [2024-07-15 15:35:35.732760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.733435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.733479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.733490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.733726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.733954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.733963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.733971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.737466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.337 [2024-07-15 15:35:35.746527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.747107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.747144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.747154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.747390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.747610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.747618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.747625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.751128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.337 [2024-07-15 15:35:35.760388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.760992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.761029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.761040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.761280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.761500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.761508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.761515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.765019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.337 [2024-07-15 15:35:35.774286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.774915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.774952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.774963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.775200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.775424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.775432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.775439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.778939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.337 [2024-07-15 15:35:35.788219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.788802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.788820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.337 [2024-07-15 15:35:35.788828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.337 [2024-07-15 15:35:35.789051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.337 [2024-07-15 15:35:35.789267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.337 [2024-07-15 15:35:35.789274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.337 [2024-07-15 15:35:35.789281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.337 [2024-07-15 15:35:35.792767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.337 [2024-07-15 15:35:35.802029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.337 [2024-07-15 15:35:35.802660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 15:35:35.802696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.802706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.802950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.803171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.803179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.803186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.806679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.338 [2024-07-15 15:35:35.815947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.816601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.816637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.816647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.816892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.817112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.817121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.817128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.820624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
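The block above repeats one failure pattern over and over: the host disconnects the controller for nqn.2016-06.io.spdk:cnode1, the posix socket layer gets connect() errno 111 (ECONNREFUSED on Linux) against 10.0.0.2:4420 because nothing is listening there while the target is being restarted, the async reconnect poll then reports the controller in error state, and the reset is marked failed before the next retry. A plain TCP probe from the test host shows the same condition; a minimal sketch (address and port are taken from the log, the probe itself is illustrative and not part of the test scripts):

#!/usr/bin/env bash
# Probe the NVMe-oF TCP listener the host keeps reconnecting to
# (10.0.0.2:4420, as seen in the log above). bash's /dev/tcp redirection
# performs a plain connect(); while the target is down it fails the same
# way posix_sock_create reports (errno 111, ECONNREFUSED), and it succeeds
# again once the new nvmf_tgt listener is back.
addr=10.0.0.2
port=4420
if timeout 1 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
    echo "listener is up on ${addr}:${port}"
else
    echo "connect() to ${addr}:${port} refused or timed out (errno 111 expected while the target restarts)"
fi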
00:30:26.338 [2024-07-15 15:35:35.829682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.830338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.830375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.830385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.830621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.830841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.830849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.830856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.834356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 893214 Killed "${NVMF_APP[@]}" "$@" 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.338 [2024-07-15 15:35:35.843419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.844008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.844044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.844056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.844293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.844512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.844520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.844527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=894914 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 894914 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 894914 ']' 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.338 15:35:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:26.338 [2024-07-15 15:35:35.848033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.338 [2024-07-15 15:35:35.857306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.857946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.857983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.857995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.858235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.858454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.858462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.858470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.861970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
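At this point the old target process (pid 893214) has been killed by bdevperf.sh line 35 and nvmfappstart launches a fresh nvmf_tgt (pid 894914) inside the cvl_0_0_ns_spdk namespace with core mask 0xE; the host's reconnect loop keeps failing until that new process is configured and listening again on 10.0.0.2:4420. As a rough sketch of the bring-up that has to happen before the reconnects can succeed (the NQN, address, port and nvmf_tgt flags come from the log; the bdev name, sizes and RPC ordering here are illustrative, not the exact nvmf/common.sh sequence):

# start a fresh target in the test namespace, matching the flags in the log,
# then wait for its RPC socket (assumption: run from the spdk repo root)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
./scripts/rpc.py framework_wait_init

# recreate the TCP transport, a backing bdev, and the subsystem/listener the host expects
./scripts/rpc.py nvmf_create_transport -t TCP
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # illustrative bdev
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420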
00:30:26.338 [2024-07-15 15:35:35.871241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.871773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.871791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.871799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.872020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.872237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.872245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.872252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.875741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.338 [2024-07-15 15:35:35.885021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.885570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.885586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.885593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.885809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.886029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.886037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.886045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.889530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.338 [2024-07-15 15:35:35.898788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.899147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.899163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.899175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.899391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.899606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.899614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.899621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.903114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.338 [2024-07-15 15:35:35.904610] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:26.338 [2024-07-15 15:35:35.904654] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.338 [2024-07-15 15:35:35.912615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.913316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.913352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.913363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.338 [2024-07-15 15:35:35.913600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.338 [2024-07-15 15:35:35.913819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.338 [2024-07-15 15:35:35.913827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.338 [2024-07-15 15:35:35.913835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.338 [2024-07-15 15:35:35.917337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.338 [2024-07-15 15:35:35.926397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.338 [2024-07-15 15:35:35.926829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 15:35:35.926847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.338 [2024-07-15 15:35:35.926855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.339 [2024-07-15 15:35:35.927077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.339 [2024-07-15 15:35:35.927294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.339 [2024-07-15 15:35:35.927302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.339 [2024-07-15 15:35:35.927309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.339 [2024-07-15 15:35:35.930796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.339 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.339 [2024-07-15 15:35:35.940130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.339 [2024-07-15 15:35:35.940755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 15:35:35.940792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.339 [2024-07-15 15:35:35.940806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.339 [2024-07-15 15:35:35.941050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.339 [2024-07-15 15:35:35.941273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.339 [2024-07-15 15:35:35.941281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.339 [2024-07-15 15:35:35.941289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.339 [2024-07-15 15:35:35.944782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.339 [2024-07-15 15:35:35.954045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.339 [2024-07-15 15:35:35.954640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 15:35:35.954658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.339 [2024-07-15 15:35:35.954666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.339 [2024-07-15 15:35:35.954891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.339 [2024-07-15 15:35:35.955108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.339 [2024-07-15 15:35:35.955116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.339 [2024-07-15 15:35:35.955123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:35.958614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.635 [2024-07-15 15:35:35.967877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:35.968426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:35.968463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:35.968475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:35.968713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:35.968940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:35.968949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:35.968958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:35.972453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.635 [2024-07-15 15:35:35.974035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:26.635 [2024-07-15 15:35:35.981720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:35.982406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:35.982443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:35.982456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:35.982693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:35.982935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:35.982949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:35.982957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:35.986454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.635 [2024-07-15 15:35:35.995519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:35.996103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:35.996140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:35.996150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:35.996387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:35.996606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:35.996615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:35.996622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:36.000125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.635 [2024-07-15 15:35:36.009398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:36.009986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:36.010024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:36.010035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:36.010276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:36.010495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:36.010503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:36.010511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:36.014015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.635 [2024-07-15 15:35:36.023289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:36.023753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:36.023771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:36.023779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:36.024002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:36.024219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:36.024227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:36.024234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:36.027722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.635 [2024-07-15 15:35:36.037199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:36.037910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:36.037946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.635 [2024-07-15 15:35:36.037956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.635 [2024-07-15 15:35:36.038053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.635 [2024-07-15 15:35:36.038077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:26.635 [2024-07-15 15:35:36.038084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.635 [2024-07-15 15:35:36.038090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.635 [2024-07-15 15:35:36.038096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.635 [2024-07-15 15:35:36.038193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.635 [2024-07-15 15:35:36.038204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.635 [2024-07-15 15:35:36.038363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.635 [2024-07-15 15:35:36.038413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.635 [2024-07-15 15:35:36.038422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.635 [2024-07-15 15:35:36.038430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.635 [2024-07-15 15:35:36.038364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.635 [2024-07-15 15:35:36.041931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.635 [2024-07-15 15:35:36.050996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.635 [2024-07-15 15:35:36.051704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 15:35:36.051742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.051753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.051997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.052217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.052226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.052233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.055728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
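The new target comes up with tracepoint group mask 0xFFFF and reactors on cores 1-3, and app_setup_trace prints exactly how to grab its trace buffer while the reconnect storm is still running. Following those notices (the commands are the ones the app prints; the snapshot filename is illustrative):

# live snapshot of the nvmf app's tracepoints, as suggested by app_setup_trace above
spdk_trace -s nvmf -i 0 > nvmf_trace.txt

# or keep the raw shared-memory buffer for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0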
00:30:26.636 [2024-07-15 15:35:36.064790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.065476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.065514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.065524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.065760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.065986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.066000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.066008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.069503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.636 [2024-07-15 15:35:36.078566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.079049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.079086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.079098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.079336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.079555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.079564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.079571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.083094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.636 [2024-07-15 15:35:36.092367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.092877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.092900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.092908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.093124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.093340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.093348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.093355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.096841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.636 [2024-07-15 15:35:36.106101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.106689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.106704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.106711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.106932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.107149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.107157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.107164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.110651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.636 [2024-07-15 15:35:36.119967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.120610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.120646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.120656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.120899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.121120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.121128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.121135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.124630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.636 [2024-07-15 15:35:36.133744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.134454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.134491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.134501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.134737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.134964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.134973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.134980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.138692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.636 [2024-07-15 15:35:36.147554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.148195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.148232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.148242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.148478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.148698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.148706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.148713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.152213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.636 [2024-07-15 15:35:36.161479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.162183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.162220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.162230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.162471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.162690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.162698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.162706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.166207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.636 [2024-07-15 15:35:36.175272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.175738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.175755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.175763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.175984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.176201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.176209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.176216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.179704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.636 [2024-07-15 15:35:36.189193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.189770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 15:35:36.189786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.636 [2024-07-15 15:35:36.189793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.636 [2024-07-15 15:35:36.190014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.636 [2024-07-15 15:35:36.190231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.636 [2024-07-15 15:35:36.190238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.636 [2024-07-15 15:35:36.190245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.636 [2024-07-15 15:35:36.193745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.636 [2024-07-15 15:35:36.203006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.636 [2024-07-15 15:35:36.203521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 15:35:36.203558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.637 [2024-07-15 15:35:36.203568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.637 [2024-07-15 15:35:36.203805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.637 [2024-07-15 15:35:36.204032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.637 [2024-07-15 15:35:36.204042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.637 [2024-07-15 15:35:36.204054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.637 [2024-07-15 15:35:36.207546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.637 [2024-07-15 15:35:36.216811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.637 [2024-07-15 15:35:36.217416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 15:35:36.217434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.637 [2024-07-15 15:35:36.217441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.637 [2024-07-15 15:35:36.217658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.637 [2024-07-15 15:35:36.217873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.637 [2024-07-15 15:35:36.217881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.637 [2024-07-15 15:35:36.217892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.637 [2024-07-15 15:35:36.221380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.637 [2024-07-15 15:35:36.230637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.637 [2024-07-15 15:35:36.231186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 15:35:36.231202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.637 [2024-07-15 15:35:36.231209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.637 [2024-07-15 15:35:36.231424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.637 [2024-07-15 15:35:36.231639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.637 [2024-07-15 15:35:36.231647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.637 [2024-07-15 15:35:36.231654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.637 [2024-07-15 15:35:36.235146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.637 [2024-07-15 15:35:36.244405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.637 [2024-07-15 15:35:36.244963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 15:35:36.245000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.637 [2024-07-15 15:35:36.245011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.637 [2024-07-15 15:35:36.245251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.637 [2024-07-15 15:35:36.245471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.637 [2024-07-15 15:35:36.245480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.637 [2024-07-15 15:35:36.245487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.637 [2024-07-15 15:35:36.248990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.899 [2024-07-15 15:35:36.258257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.899 [2024-07-15 15:35:36.258941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-07-15 15:35:36.258978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.899 [2024-07-15 15:35:36.258990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.899 [2024-07-15 15:35:36.259229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.899 [2024-07-15 15:35:36.259449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.899 [2024-07-15 15:35:36.259457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.899 [2024-07-15 15:35:36.259464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.899 [2024-07-15 15:35:36.262965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.899 [2024-07-15 15:35:36.272026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.899 [2024-07-15 15:35:36.272647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-07-15 15:35:36.272684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.899 [2024-07-15 15:35:36.272694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.899 [2024-07-15 15:35:36.272937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.899 [2024-07-15 15:35:36.273157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.899 [2024-07-15 15:35:36.273165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.899 [2024-07-15 15:35:36.273173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.899 [2024-07-15 15:35:36.276667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.899 [2024-07-15 15:35:36.285942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.899 [2024-07-15 15:35:36.286601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.899 [2024-07-15 15:35:36.286637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.899 [2024-07-15 15:35:36.286647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.899 [2024-07-15 15:35:36.286891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.899 [2024-07-15 15:35:36.287111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.899 [2024-07-15 15:35:36.287119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.899 [2024-07-15 15:35:36.287126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.899 [2024-07-15 15:35:36.290617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.899 [2024-07-15 15:35:36.299683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.300260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.300278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.300286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.300506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.300722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.300730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.300736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.304227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.900 [2024-07-15 15:35:36.313487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.313932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.313951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.313959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.314176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.314392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.314401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.314408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.317901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.900 [2024-07-15 15:35:36.327369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.328127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.328164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.328175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.328411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.328630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.328639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.328646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.332151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.900 [2024-07-15 15:35:36.341214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.341689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.341706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.341714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.341936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.342152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.342159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.342171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.345659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.900 [2024-07-15 15:35:36.355132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.355605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.355621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.355628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.355844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.356064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.356073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.356080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.359567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.900 [2024-07-15 15:35:36.369034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.369618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.369632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.369640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.369854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.370075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.370083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.370090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.373576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.900 [2024-07-15 15:35:36.382837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.383385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.383401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.383408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.383623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.383839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.383846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.383853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.387343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.900 [2024-07-15 15:35:36.396602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.397152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.397174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.397181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.397397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.397612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.397620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.397627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.401124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.900 [2024-07-15 15:35:36.410384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.411133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.411170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.411180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.900 [2024-07-15 15:35:36.411416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.900 [2024-07-15 15:35:36.411635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.900 [2024-07-15 15:35:36.411643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.900 [2024-07-15 15:35:36.411651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.900 [2024-07-15 15:35:36.415150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.900 [2024-07-15 15:35:36.424213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.900 [2024-07-15 15:35:36.424903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.900 [2024-07-15 15:35:36.424939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.900 [2024-07-15 15:35:36.424949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.425185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.425405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.425413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.425420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.428917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.901 [2024-07-15 15:35:36.437975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.438660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.438697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.438707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.438951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.439175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.439183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.439190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.442683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.901 [2024-07-15 15:35:36.451741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.452380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.452417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.452427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.452663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.452891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.452899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.452907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.456401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.901 [2024-07-15 15:35:36.465670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.466357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.466393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.466403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.466639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.466858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.466867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.466874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.470375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.901 [2024-07-15 15:35:36.479434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.480025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.480043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.480051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.480268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.480483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.480491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.480497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.484006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:26.901 [2024-07-15 15:35:36.493266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.493912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.493949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.493961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.494198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.494417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.494425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.494432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.497932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:26.901 [2024-07-15 15:35:36.507200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:26.901 [2024-07-15 15:35:36.507878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.901 [2024-07-15 15:35:36.507921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:26.901 [2024-07-15 15:35:36.507933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:26.901 [2024-07-15 15:35:36.508169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:26.901 [2024-07-15 15:35:36.508389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:26.901 [2024-07-15 15:35:36.508397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:26.901 [2024-07-15 15:35:36.508404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:26.901 [2024-07-15 15:35:36.511903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.163 [2024-07-15 15:35:36.520965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.163 [2024-07-15 15:35:36.521522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.163 [2024-07-15 15:35:36.521540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.163 [2024-07-15 15:35:36.521548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.163 [2024-07-15 15:35:36.521765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.163 [2024-07-15 15:35:36.521988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.163 [2024-07-15 15:35:36.521997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.163 [2024-07-15 15:35:36.522004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.163 [2024-07-15 15:35:36.525493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.163 [2024-07-15 15:35:36.534751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.163 [2024-07-15 15:35:36.535309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.163 [2024-07-15 15:35:36.535327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.163 [2024-07-15 15:35:36.535339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.163 [2024-07-15 15:35:36.535555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.163 [2024-07-15 15:35:36.535771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.163 [2024-07-15 15:35:36.535779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.163 [2024-07-15 15:35:36.535786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.163 [2024-07-15 15:35:36.539284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.163 [2024-07-15 15:35:36.548553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.163 [2024-07-15 15:35:36.549030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.163 [2024-07-15 15:35:36.549068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.163 [2024-07-15 15:35:36.549079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.163 [2024-07-15 15:35:36.549319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.163 [2024-07-15 15:35:36.549538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.163 [2024-07-15 15:35:36.549546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.163 [2024-07-15 15:35:36.549554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.163 [2024-07-15 15:35:36.553056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.163 [2024-07-15 15:35:36.562322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.163 [2024-07-15 15:35:36.562928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.163 [2024-07-15 15:35:36.562952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.163 [2024-07-15 15:35:36.562960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.163 [2024-07-15 15:35:36.563182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.163 [2024-07-15 15:35:36.563399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.163 [2024-07-15 15:35:36.563412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.163 [2024-07-15 15:35:36.563420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.163 [2024-07-15 15:35:36.566916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.163 [2024-07-15 15:35:36.576175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.576745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.576782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.576792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.577036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.577257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.577270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.577277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.580772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.164 [2024-07-15 15:35:36.590048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.590727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.590763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.590774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.591017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.591238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.591246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.591254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.594748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.164 [2024-07-15 15:35:36.603805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.604499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.604536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.604549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.604787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.605015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.605024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.605032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.608532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.164 [2024-07-15 15:35:36.617592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.618260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.618296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.618307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.618543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.618763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.618772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.618779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.622287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.164 [2024-07-15 15:35:36.631364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.631926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.631964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.631976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.632214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.632433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.632442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.632449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.635954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.164 [2024-07-15 15:35:36.645223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.645771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.645789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.645797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.646019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.646235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.646243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.646250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.649736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.164 [2024-07-15 15:35:36.659008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.659619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.659656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.659667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.659911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.660131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.660139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.660147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.663639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.164 [2024-07-15 15:35:36.672916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.673592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.673629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.673639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.673875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.674104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.674112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.674119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.677616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.164 [2024-07-15 15:35:36.686693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.687379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.687417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.687427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.687662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.687882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.687900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.687907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.691404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.164 [2024-07-15 15:35:36.700460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.701129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.164 [2024-07-15 15:35:36.701166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.164 [2024-07-15 15:35:36.701176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.164 [2024-07-15 15:35:36.701412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.164 [2024-07-15 15:35:36.701631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.164 [2024-07-15 15:35:36.701640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.164 [2024-07-15 15:35:36.701647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.164 [2024-07-15 15:35:36.705148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.164 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.164 [2024-07-15 15:35:36.713579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.164 [2024-07-15 15:35:36.714210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.164 [2024-07-15 15:35:36.714806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.165 [2024-07-15 15:35:36.714824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.165 [2024-07-15 15:35:36.714832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.165 [2024-07-15 15:35:36.715054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.165 [2024-07-15 15:35:36.715270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.165 [2024-07-15 15:35:36.715278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.165 [2024-07-15 15:35:36.715285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.165 [2024-07-15 15:35:36.718772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.165 [2024-07-15 15:35:36.728033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.165 [2024-07-15 15:35:36.728619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.165 [2024-07-15 15:35:36.728634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.165 [2024-07-15 15:35:36.728642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.165 [2024-07-15 15:35:36.728857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.165 [2024-07-15 15:35:36.729077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.165 [2024-07-15 15:35:36.729085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.165 [2024-07-15 15:35:36.729092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.165 [2024-07-15 15:35:36.732580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.165 [2024-07-15 15:35:36.741858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.165 [2024-07-15 15:35:36.742411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.165 [2024-07-15 15:35:36.742427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.165 [2024-07-15 15:35:36.742434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.165 [2024-07-15 15:35:36.742650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.165 [2024-07-15 15:35:36.742865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.165 [2024-07-15 15:35:36.742873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.165 [2024-07-15 15:35:36.742880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.165 Malloc0 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.165 [2024-07-15 15:35:36.746400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.165 [2024-07-15 15:35:36.755686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.165 [2024-07-15 15:35:36.756376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.165 [2024-07-15 15:35:36.756413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.165 [2024-07-15 15:35:36.756423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.165 [2024-07-15 15:35:36.756660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.165 [2024-07-15 15:35:36.756879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.165 [2024-07-15 15:35:36.756896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.165 [2024-07-15 15:35:36.756903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.165 [2024-07-15 15:35:36.760399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:27.165 [2024-07-15 15:35:36.769464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.165 [2024-07-15 15:35:36.770137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.165 [2024-07-15 15:35:36.770174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2017120 with addr=10.0.0.2, port=4420 00:30:27.165 [2024-07-15 15:35:36.770184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2017120 is same with the state(5) to be set 00:30:27.165 [2024-07-15 15:35:36.770420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2017120 (9): Bad file descriptor 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.165 [2024-07-15 15:35:36.770640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:27.165 [2024-07-15 15:35:36.770648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:27.165 [2024-07-15 15:35:36.770656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:27.165 [2024-07-15 15:35:36.774159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:27.165 [2024-07-15 15:35:36.777229] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.165 15:35:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 893616 00:30:27.426 [2024-07-15 15:35:36.783231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:27.426 [2024-07-15 15:35:36.816215] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
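Annotation: the target-setup steps are interleaved with the reconnect errors above. Untangled, the RPC sequence the bdevperf host test issues is roughly the following (a sketch reconstructed from the trace; rpc_cmd is the common.sh helper, paths abbreviated):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the last call installs the listener on 10.0.0.2:4420, the pending controller reset succeeds ("Resetting controller successful") and the script waits on the bdevperf process (wait 893616).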
00:30:37.445 00:30:37.445 Latency(us) 00:30:37.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.445 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:37.445 Verification LBA range: start 0x0 length 0x4000 00:30:37.445 Nvme1n1 : 15.01 8001.91 31.26 9858.51 0.00 7141.20 781.65 21189.97 00:30:37.445 =================================================================================================================== 00:30:37.445 Total : 8001.91 31.26 9858.51 0.00 7141.20 781.65 21189.97 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:37.445 15:35:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:37.446 rmmod nvme_tcp 00:30:37.446 rmmod nvme_fabrics 00:30:37.446 rmmod nvme_keyring 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 894914 ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 894914 ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 894914' 00:30:37.446 killing process with pid 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 894914 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.446 
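Annotation: the flattened bdevperf summary above reads as follows: over the 15.01 s run, Nvme1n1 (core mask 0x1, verify workload, queue depth 128, 4096-byte I/O) averaged 8001.91 IOPS (31.26 MiB/s), with 9858.51 failed I/Os per second during the disconnect windows, no timeouts, and latencies of 7141.20 us average, 781.65 us minimum, 21189.97 us maximum.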
15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:37.446 15:35:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.387 15:35:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:38.387 00:30:38.387 real 0m28.145s 00:30:38.387 user 1m2.999s 00:30:38.387 sys 0m7.265s 00:30:38.387 15:35:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:38.387 15:35:47 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:38.387 ************************************ 00:30:38.387 END TEST nvmf_bdevperf 00:30:38.387 ************************************ 00:30:38.387 15:35:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:38.387 15:35:47 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:38.387 15:35:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:38.387 15:35:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.387 15:35:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:38.387 ************************************ 00:30:38.387 START TEST nvmf_target_disconnect 00:30:38.387 ************************************ 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:38.387 * Looking for test storage... 
00:30:38.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.387 15:35:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:38.388 15:35:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:46.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:46.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.535 15:35:55 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:46.535 Found net devices under 0000:31:00.0: cvl_0_0 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:46.535 Found net devices under 0000:31:00.1: cvl_0_1 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.535 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:46.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:30:46.536 00:30:46.536 --- 10.0.0.2 ping statistics --- 00:30:46.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.536 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:30:46.536 00:30:46.536 --- 10.0.0.1 ping statistics --- 00:30:46.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.536 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 ************************************ 00:30:46.536 START TEST nvmf_target_disconnect_tc1 00:30:46.536 ************************************ 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:46.536 
15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.536 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.536 [2024-07-15 15:35:55.606551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:46.536 [2024-07-15 15:35:55.606606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5d5710 with addr=10.0.0.2, port=4420 00:30:46.536 [2024-07-15 15:35:55.606629] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:46.536 [2024-07-15 15:35:55.606640] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:46.536 [2024-07-15 15:35:55.606647] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:46.536 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:46.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:46.536 Initializing NVMe Controllers 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:46.536 00:30:46.536 real 0m0.116s 00:30:46.536 user 0m0.051s 00:30:46.536 sys 0m0.064s 
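Annotation: test case tc1 is a pure negative test. It runs the reconnect example against 10.0.0.2:4420 before any target has been started and passes only if the probe fails, i.e. the binary exits non-zero (es=1 above). Roughly (a sketch; the path is abbreviated):

    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

where NOT is the autotest_common.sh helper that succeeds only when the wrapped command fails, matching the "spdk_nvme_probe() failed for transport address '10.0.0.2'" error seen above.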
00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 ************************************ 00:30:46.536 END TEST nvmf_target_disconnect_tc1 00:30:46.536 ************************************ 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 ************************************ 00:30:46.536 START TEST nvmf_target_disconnect_tc2 00:30:46.536 ************************************ 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=901174 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 901174 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 901174 ']' 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.536 15:35:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:46.536 [2024-07-15 15:35:55.761578] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:46.536 [2024-07-15 15:35:55.761637] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.536 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.536 [2024-07-15 15:35:55.852095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:46.536 [2024-07-15 15:35:55.946690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.536 [2024-07-15 15:35:55.946749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.536 [2024-07-15 15:35:55.946758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.536 [2024-07-15 15:35:55.946764] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.536 [2024-07-15 15:35:55.946771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.536 [2024-07-15 15:35:55.946964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:46.536 [2024-07-15 15:35:55.947159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:46.536 [2024-07-15 15:35:55.947324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:46.536 [2024-07-15 15:35:55.947326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:47.110 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 Malloc0 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:47.111 15:35:56 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 [2024-07-15 15:35:56.631295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 [2024-07-15 15:35:56.659598] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=901375 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:47.111 15:35:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:47.373 EAL: No free 2048 kB 
hugepages reported on node 1 00:30:49.299 15:35:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 901174 00:30:49.299 15:35:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.299 Read completed with error (sct=0, sc=8) 00:30:49.299 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 [2024-07-15 15:35:58.689107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:49.300 [2024-07-15 15:35:58.689510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.689532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to 
recover it. 00:30:49.300 [2024-07-15 15:35:58.689875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.689891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.689966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.689977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.690311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.690348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.690684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.690696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.690922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.690934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.691349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.691385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.691682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.691699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.692152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.692188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.692528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.692540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.692891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.692903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 
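Annotation: reconstructed from the trace above, tc2 boils down to roughly this sequence (a sketch with paths abbreviated, not the literal script):

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # nvmfpid=901174
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &    # reconnectpid=901375
    sleep 2
    kill -9 901174    # pull the target out from under the initiator
    sleep 2

The "Read/Write completed with error" bursts and the repeated "connect() failed, errno = 111 ... qpair failed and we were unable to recover it." messages that follow are the expected outcome: once the target process is killed, every outstanding I/O errors out and every reconnect attempt gets ECONNREFUSED.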
00:30:49.300 [2024-07-15 15:35:58.693247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.693283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.693601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.693613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.694119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.694156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.694495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.694507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.694853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.694864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.694951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.694962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 
00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Write completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 Read completed with error (sct=0, sc=8) 00:30:49.300 starting I/O failed 00:30:49.300 [2024-07-15 15:35:58.695223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:49.300 [2024-07-15 15:35:58.695571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.695586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 
00:30:49.300 [2024-07-15 15:35:58.695713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.695723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.696167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.300 [2024-07-15 15:35:58.696202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.300 qpair failed and we were unable to recover it. 00:30:49.300 [2024-07-15 15:35:58.696538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.696551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.696895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.696906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.697412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.697447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.697786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.697799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.698267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.698303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.698576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.698588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.698784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.698794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.699168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.699182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 
00:30:49.301 [2024-07-15 15:35:58.699489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.699499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.699670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.699681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.699975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.699985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.700317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.700327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.700677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.700688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.701040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.701050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.701247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.701258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.701513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.701524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.701747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.701757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.702069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.702080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 
00:30:49.301 [2024-07-15 15:35:58.702375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.702385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.702682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.702692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.702908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.702918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.703251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.703261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.703612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.703622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.703959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.703969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.704355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.704365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.704569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.704581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.704927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.704938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.705266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.705275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 
00:30:49.301 [2024-07-15 15:35:58.705607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.705617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.705798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.705807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.705981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.705992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.706288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.706298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.706617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.706627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.706927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.706938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.707250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.707260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.707528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.707843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.707852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.708175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.708186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 
00:30:49.301 [2024-07-15 15:35:58.708502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.708512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.708854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.708864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.709171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.301 [2024-07-15 15:35:58.709182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.301 qpair failed and we were unable to recover it. 00:30:49.301 [2024-07-15 15:35:58.709482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.709492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.709807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.709818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.710138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.710148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.710471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.710482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.710783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.710793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.710981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.710992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.711275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.711291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 
00:30:49.302 [2024-07-15 15:35:58.711465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.711476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.711844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.711854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.712160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.712170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.712521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.712531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.712692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.712702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.712986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.712998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.713299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.713310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.713599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.713609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.713922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.713933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.714223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.714233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 
00:30:49.302 [2024-07-15 15:35:58.714534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.714544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.714863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.714874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.715190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.715200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.715477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.715488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.715796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.715807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.716096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.716107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.716452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.716462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.716756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.716767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.717002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.717012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.717213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.717224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 
00:30:49.302 [2024-07-15 15:35:58.717537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.717548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.717857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.717868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.718204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.718215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.718432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.718442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.718712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.718722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.718894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.718911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.719301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.719316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.719618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.719632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.719926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.719941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.720206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.720222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 
00:30:49.302 [2024-07-15 15:35:58.720503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.720517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.720850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.720864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.721205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.721219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.302 [2024-07-15 15:35:58.721562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.302 [2024-07-15 15:35:58.721576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.302 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.721893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.721908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.722094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.722111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.722473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.722487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.722744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.722758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.723074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.723089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.723426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.723443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 
00:30:49.303 [2024-07-15 15:35:58.723773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.723787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.723964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.723979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.724263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.724278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.724622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.724642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.724990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.725005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.725211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.725227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.725547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.725561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.725894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.725909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.726240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.726254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.726639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.726653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 
00:30:49.303 [2024-07-15 15:35:58.726957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.726972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.727298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.727312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.727662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.727676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.727861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.727876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.728301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.728317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.728629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.728643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.728966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.728984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.729315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.729334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.729687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.729705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.730046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.730065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 
00:30:49.303 [2024-07-15 15:35:58.730391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.730409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.730713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.730731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.731019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.731038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.731394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.731413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.731724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.731743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.732071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.732090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.732397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.732415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.732609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.732630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.732944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.732964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.733287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.733305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 
00:30:49.303 [2024-07-15 15:35:58.733613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.733632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.733948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.733967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.734308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.734326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.734626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.734644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.303 qpair failed and we were unable to recover it. 00:30:49.303 [2024-07-15 15:35:58.734961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.303 [2024-07-15 15:35:58.734981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.735168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.735188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.735534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.735553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.735909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.735929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.736297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.736316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.736610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.736632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 
00:30:49.304 [2024-07-15 15:35:58.736991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.737010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.737348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.737366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.737667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.737686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.737890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.737911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.738215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.738233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.738574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.738592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.738955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.738973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.739298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.739317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.739657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.739675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.740024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.740042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 
00:30:49.304 [2024-07-15 15:35:58.740355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.740373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.740747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.740765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.741066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.741084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.741397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.741417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.741716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.741734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.741965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.741985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.742319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.742344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.742658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.742682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.743065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.743091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.743424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.743449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 
00:30:49.304 [2024-07-15 15:35:58.743807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.743832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.744128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.744154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.744384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.744409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.744772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.744796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.745183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.745208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.745569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.745594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.745956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.745982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.304 qpair failed and we were unable to recover it. 00:30:49.304 [2024-07-15 15:35:58.746340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.304 [2024-07-15 15:35:58.746365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.746729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.746753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.747088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.747114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 
00:30:49.305 [2024-07-15 15:35:58.747449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.747474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.747822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.747847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.748220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.748246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.748668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.748693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.749031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.749058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.749412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.749437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.749800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.749824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.750219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.750245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.750575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.750599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.750828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.750861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 
00:30:49.305 [2024-07-15 15:35:58.751223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.751250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.751606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.751631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.751989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.752015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.752405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.752430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.752676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.752700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.752927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.752955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.753329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.753354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.753582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.753608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.753952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.753978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.754304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.754329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 
00:30:49.305 [2024-07-15 15:35:58.754699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.754723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.754950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.754977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.755324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.755352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.755743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.755772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.756133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.756162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.756504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.756532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.756867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.756918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.757259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.757288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.757679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.757709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.758037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.758067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 
00:30:49.305 [2024-07-15 15:35:58.758396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.758424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.758762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.758792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.759035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.759065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.759320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.759347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.759591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.759619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.759974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.760004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.760352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.760388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.760626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.305 [2024-07-15 15:35:58.760654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.305 qpair failed and we were unable to recover it. 00:30:49.305 [2024-07-15 15:35:58.760985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.761014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.761339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.761367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 
00:30:49.306 [2024-07-15 15:35:58.761736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.761764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.762101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.762131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.762459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.762487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Write completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Write completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Write completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read 
completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 Read completed with error (sct=0, sc=8) 00:30:49.306 starting I/O failed 00:30:49.306 [2024-07-15 15:35:58.762767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:49.306 [2024-07-15 15:35:58.763286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.763325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.763566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.763578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.763809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.763819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.764158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.764171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.764554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.764564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.764789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.764798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.765189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.765200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.765489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.765499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.765819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.765829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 
00:30:49.306 [2024-07-15 15:35:58.766210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.766222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.766543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.766553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.766879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.766894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.767272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.767281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.767601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.767611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.767807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.767817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.768127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.768138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.768406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.768416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.768736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.768746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.769059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.769070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 
00:30:49.306 [2024-07-15 15:35:58.769391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.769401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.769719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.769730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.769815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.769826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.770140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.770151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.770451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.770462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.770759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.306 [2024-07-15 15:35:58.770769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.306 qpair failed and we were unable to recover it. 00:30:49.306 [2024-07-15 15:35:58.771003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.771013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.771358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.771368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.771589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.771599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.771956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.771966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 
00:30:49.307 [2024-07-15 15:35:58.772274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.772284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.772578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.772587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.772924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.772935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.773146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.773157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.773471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.773481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.773867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.773877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.774194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.774204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.774526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.774537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.774853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.774863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.775082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.775092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 
00:30:49.307 [2024-07-15 15:35:58.775302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.775311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.775616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.775628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.776100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.776110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.776414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.776423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.776731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.776741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.777044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.777053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.777373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.777383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.777704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.777714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.777938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.777948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.778319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.778329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 
00:30:49.307 [2024-07-15 15:35:58.778675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.778685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.778911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.778921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.779119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.779129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.779307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.779318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.779546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.779556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.779870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.779880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.780114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.780124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.780406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.780416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.780650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.780660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.780817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.780828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 
00:30:49.307 [2024-07-15 15:35:58.781147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.781157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.781457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.781468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.781802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.781812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.782057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.782067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.782447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.782457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.782797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.782807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.783124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.783133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.307 [2024-07-15 15:35:58.783470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.307 [2024-07-15 15:35:58.783480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.307 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.783834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.783846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.784061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.784072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 
00:30:49.308 [2024-07-15 15:35:58.784280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.784289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.784579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.784588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.784911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.784921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.785220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.785230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.785527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.785537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.785874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.785886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.786222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.786232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.786595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.786604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.786943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.786952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.787285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.787294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 
00:30:49.308 [2024-07-15 15:35:58.787673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.787683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.787907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.787917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.788143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.788153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.788505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.788515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.788842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.788852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.789230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.789240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.789554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.789564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.789703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.789712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.790041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.790051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 00:30:49.308 [2024-07-15 15:35:58.790354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.308 [2024-07-15 15:35:58.790364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.308 qpair failed and we were unable to recover it. 
00:30:49.308 [2024-07-15 15:35:58.790713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.308 [2024-07-15 15:35:58.790722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.308 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back here (roughly 200 further occurrences between 15:35:58.790 and 15:35:58.857; only the microsecond timestamps differ) as connect() to 10.0.0.2:4420 keeps failing with errno = 111 and tqpair=0x227acf0 cannot be recovered; the repetitions are elided ...]
00:30:49.313 [2024-07-15 15:35:58.857465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.313 [2024-07-15 15:35:58.857475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.313 qpair failed and we were unable to recover it.
00:30:49.313 [2024-07-15 15:35:58.857793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.313 [2024-07-15 15:35:58.857802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.313 qpair failed and we were unable to recover it. 00:30:49.313 [2024-07-15 15:35:58.858178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.313 [2024-07-15 15:35:58.858188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.313 qpair failed and we were unable to recover it. 00:30:49.313 [2024-07-15 15:35:58.858402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.313 [2024-07-15 15:35:58.858411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.313 qpair failed and we were unable to recover it. 00:30:49.313 [2024-07-15 15:35:58.858714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.313 [2024-07-15 15:35:58.858724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.313 qpair failed and we were unable to recover it. 00:30:49.313 [2024-07-15 15:35:58.859013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.859023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.859339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.859348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.859651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.859660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.859956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.859966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.860267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.860277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.860598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.860607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 
00:30:49.314 [2024-07-15 15:35:58.860981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.860991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.861291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.861300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.861714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.861725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.861876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.861890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.862192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.862201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.862523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.862532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.862843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.862852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.863145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.863155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.863454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.863463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.863744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.863754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 
00:30:49.314 [2024-07-15 15:35:58.864049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.864059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.864377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.864387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.864657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.864666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.864989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.864999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.865308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.865317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.865491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.865502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.865821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.865831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.866172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.866183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.866521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.866531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.866840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.866850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 
00:30:49.314 [2024-07-15 15:35:58.867173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.867183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.867516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.867525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.867871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.867880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.868177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.868187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.868524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.868533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.868873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.868882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.869172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.869181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.869469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.869478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.869796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.869805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.870121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.870131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 
00:30:49.314 [2024-07-15 15:35:58.870467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.870477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.870768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.870778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.871063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.871073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.871239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.871250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.871559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.871568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.314 qpair failed and we were unable to recover it. 00:30:49.314 [2024-07-15 15:35:58.871881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.314 [2024-07-15 15:35:58.871894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.872228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.872238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.872534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.872543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.872855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.872864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.873205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.873215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 
00:30:49.315 [2024-07-15 15:35:58.873380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.873391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.873717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.873726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.874115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.874125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.874444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.874454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.874766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.874775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.874914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.874924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.875246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.875255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.875572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.875581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.875920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.875929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.876312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.876321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 
00:30:49.315 [2024-07-15 15:35:58.876645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.876654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.876968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.876977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.877313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.877323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.877636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.877645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.877924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.877934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.878257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.878267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.878555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.878565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.878880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.878898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.879115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.879124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.879323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.879333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 
00:30:49.315 [2024-07-15 15:35:58.879640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.879649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.879987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.880002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.880396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.880405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.880707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.880717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.881049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.881058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.881379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.881389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.881585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.881594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.881902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.881914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.882224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.882234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.882565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.882575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 
00:30:49.315 [2024-07-15 15:35:58.882917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.882929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.883246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.883262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.883589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.883598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.883887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.883897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.884214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.884223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.884521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.884530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.315 [2024-07-15 15:35:58.884729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.315 [2024-07-15 15:35:58.884738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.315 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.885043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.885053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.885357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.885367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.885679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.885688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 
00:30:49.316 [2024-07-15 15:35:58.885911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.885921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.886274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.886283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.886630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.886639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.886952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.886962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.887268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.887277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.887489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.887499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.887816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.887826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.888127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.888138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.888462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.888471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.888690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.888699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 
00:30:49.316 [2024-07-15 15:35:58.888933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.888942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.889216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.889225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.889515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.889524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.889858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.889868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.890178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.890188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.890478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.890488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.890714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.890723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.891041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.891053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.891380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.891389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.891709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.891719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 
00:30:49.316 [2024-07-15 15:35:58.892054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.892063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.892397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.892407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.892773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.892782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.893106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.893116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.893463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.893473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.893848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.893858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.894122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.894132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.894353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.894362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.894677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.894686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.894872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.894894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 
00:30:49.316 [2024-07-15 15:35:58.895217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.895227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.895538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.895547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.895910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.895920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.896218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.896228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.896540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.896549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.896904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.896914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.897190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.897199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.897491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.316 [2024-07-15 15:35:58.897500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.316 qpair failed and we were unable to recover it. 00:30:49.316 [2024-07-15 15:35:58.897820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-07-15 15:35:58.897829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.317 qpair failed and we were unable to recover it. 00:30:49.317 [2024-07-15 15:35:58.898040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.317 [2024-07-15 15:35:58.898050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.317 qpair failed and we were unable to recover it. 
00:30:49.317 [2024-07-15 15:35:58.898377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.317 [2024-07-15 15:35:58.898386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.317 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for each reconnect attempt, with timestamps running from 15:35:58.898 through 15:35:58.962 ...]
00:30:49.598 [2024-07-15 15:35:58.962854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.598 [2024-07-15 15:35:58.962863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.598 qpair failed and we were unable to recover it.
00:30:49.598 [2024-07-15 15:35:58.963133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.963143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.963460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.963468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.963838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.963847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.964149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.964158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.964381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.964390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.964666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.964675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.965012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.965022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.965221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.965231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.965545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.965554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.965868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.965877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 
00:30:49.598 [2024-07-15 15:35:58.965932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.965942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.966293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.966302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.966528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.966537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.966837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.966846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.967124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.967134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.967471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.967480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.967831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.967841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.968149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.968158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.968483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.968492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.968811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.968820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 
00:30:49.598 [2024-07-15 15:35:58.969110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.969120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.969453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.969462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.969785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.969794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.970176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.970185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.970525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.970534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.970851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.970861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.971014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.971025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.971386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.971395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.971701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.971711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.972041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.972051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 
00:30:49.598 [2024-07-15 15:35:58.972381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.972390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.972731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.972741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.973072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.973081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.973398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.973408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.973634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.973643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.973957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.598 [2024-07-15 15:35:58.973967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.598 qpair failed and we were unable to recover it. 00:30:49.598 [2024-07-15 15:35:58.974284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.974293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.974584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.974601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.974924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.974935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.975242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.975252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 
00:30:49.599 [2024-07-15 15:35:58.975474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.975483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.975687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.975696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.976019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.976028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.976381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.976390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.976691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.976700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.977022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.977032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.977347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.977357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.977595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.977604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.977943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.977953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.978266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.978275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 
00:30:49.599 [2024-07-15 15:35:58.978596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.978605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.978951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.978961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.979133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.979145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.979463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.979472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.979838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.979847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.980186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.980196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.980516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.980526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.980841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.980850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.981194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.981204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.981411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.981420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 
00:30:49.599 [2024-07-15 15:35:58.981771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.981781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.982106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.982115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.982437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.982447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.982751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.982761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.983115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.983126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.983351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.983361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.983571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.983581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.983792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.983802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.984115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.984125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.984453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.984463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 
00:30:49.599 [2024-07-15 15:35:58.984757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.984767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.985084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.985094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.985382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.985391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.985682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.985692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.986036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.986045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.986351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.986361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.986574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.986583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.599 qpair failed and we were unable to recover it. 00:30:49.599 [2024-07-15 15:35:58.986797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.599 [2024-07-15 15:35:58.986806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.987102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.987112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.987432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.987443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 
00:30:49.600 [2024-07-15 15:35:58.987671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.987680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.987990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.988000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.988208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.988217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.988557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.988566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.988875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.988926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.989250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.989259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.989615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.989624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.989938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.989947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.990246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.990256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.990560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.990569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 
00:30:49.600 [2024-07-15 15:35:58.990896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.990911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.991198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.991208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.991447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.991455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.991636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.991645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.991945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.991955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.992280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.992289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.992598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.992607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.992926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.992935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.993006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.993015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.993192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.993201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 
00:30:49.600 [2024-07-15 15:35:58.993519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.993528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.993839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.993849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.994171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.994180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.994502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.994511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.994703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.994712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.995015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.995025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.995343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.995356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.995671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.995680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.995917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.995927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.996208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.996217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 
00:30:49.600 [2024-07-15 15:35:58.996545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.996554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.996873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.996881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.997261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.997271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.997581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.997590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.997808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.997818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.998119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.998128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.998468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.998478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.998865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.998874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.600 [2024-07-15 15:35:58.999127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.600 [2024-07-15 15:35:58.999137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.600 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:58.999519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:58.999528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 
00:30:49.601 [2024-07-15 15:35:58.999825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:58.999834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.000014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.000023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.000303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.000312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.000605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.000614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.000915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.000925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.001275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.001284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.001591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.001601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.001915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.001925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.002109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.002119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 00:30:49.601 [2024-07-15 15:35:59.002466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.601 [2024-07-15 15:35:59.002475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.601 qpair failed and we were unable to recover it. 
00:30:49.601 [2024-07-15 15:35:59.002687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.601 [2024-07-15 15:35:59.002696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.601 qpair failed and we were unable to recover it.
00:30:49.601 [... the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x227acf0 (addr=10.0.0.2, port=4420) repeat for every reconnect attempt between 15:35:59.002 and 15:35:59.067, each ending with "qpair failed and we were unable to recover it." ...]
00:30:49.606 [2024-07-15 15:35:59.067467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:49.606 [2024-07-15 15:35:59.067477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:49.606 qpair failed and we were unable to recover it.
00:30:49.606 [2024-07-15 15:35:59.067783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.067793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.068045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.068055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.068370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.068380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.068577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.068586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.068951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.068961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.069186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.069195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.069547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.069556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.069898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.069908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.070304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.070314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.070678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.070687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 
00:30:49.606 [2024-07-15 15:35:59.070858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.070867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.071190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.071200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.071514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.071523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.606 qpair failed and we were unable to recover it. 00:30:49.606 [2024-07-15 15:35:59.071844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.606 [2024-07-15 15:35:59.071853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.072189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.072198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.072529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.072538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.072862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.072871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.073242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.073252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.073589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.073598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.073915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.073925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 
00:30:49.607 [2024-07-15 15:35:59.074268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.074277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.074673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.074682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.075029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.075039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.075355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.075364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.075675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.075687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.076062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.076071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.076384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.076393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.076708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.076717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.077034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.077043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.077383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.077392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 
00:30:49.607 [2024-07-15 15:35:59.077699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.077708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.078032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.078042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.078372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.078382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.078672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.078681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.079002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.079012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.079204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.079214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.079536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.079546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.079837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.079846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.080186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.080195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.080457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.080466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 
00:30:49.607 [2024-07-15 15:35:59.080799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.080808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.081169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.081179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.081468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.081477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.081789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.081798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.082155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.082165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.082326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.082336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.082673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.082682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.083014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.083024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.083316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.083325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.083619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.083628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 
00:30:49.607 [2024-07-15 15:35:59.083932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.083941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.084256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.084267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.084609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.084619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.084975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.084985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.607 [2024-07-15 15:35:59.085291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.607 [2024-07-15 15:35:59.085301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.607 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.085615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.085624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.085960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.085970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.086298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.086307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.086632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.086642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.086949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.086959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 
00:30:49.608 [2024-07-15 15:35:59.087300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.087310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.087645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.087654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.087822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.087831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.088189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.088199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.088494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.088503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.088794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.088803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.089185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.089195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.089488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.089497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.089834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.089843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.090127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.090137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 
00:30:49.608 [2024-07-15 15:35:59.090449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.090458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.090649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.090658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.090935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.090946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.091168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.091178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.091592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.091602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.091912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.091922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.092257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.092266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.092631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.092640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.092890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.092902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.093214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.093223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 
00:30:49.608 [2024-07-15 15:35:59.093523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.093532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.093713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.093722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.093961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.093971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.094305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.094314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.094646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.094655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.095000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.095010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.095240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.095250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.095569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.095579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.095986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.095995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.096299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.096308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 
00:30:49.608 [2024-07-15 15:35:59.096620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.096629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.096933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.096943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.097301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.097311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.097570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.097579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.097908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.097918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.098220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.608 [2024-07-15 15:35:59.098230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.608 qpair failed and we were unable to recover it. 00:30:49.608 [2024-07-15 15:35:59.098521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.098531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.098879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.098895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.099211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.099220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.099499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.099509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 
00:30:49.609 [2024-07-15 15:35:59.099843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.099852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.100203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.100212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.100545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.100554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.100866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.100875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.101244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.101253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.101568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.101577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.101855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.101864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.102197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.102207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.102547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.102557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.102880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.102901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 
00:30:49.609 [2024-07-15 15:35:59.103221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.103231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.103552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.103562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.103925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.103935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.104231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.104241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.104595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.104604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.104911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.104921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.105253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.105263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.105641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.105650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.105931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.105940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.106132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.106141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 
00:30:49.609 [2024-07-15 15:35:59.106460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.106469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.106620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.106629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.106918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.106928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.107242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.107251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.107631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.107640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.107948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.107958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.108271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.108280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.108674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.108684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.109043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.109053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.109368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.109378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 
00:30:49.609 [2024-07-15 15:35:59.109710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.109719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.110019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.110029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.110352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.110361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.110556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.110566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.110860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.110869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.111162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.111172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.111461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.111470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.609 qpair failed and we were unable to recover it. 00:30:49.609 [2024-07-15 15:35:59.111808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.609 [2024-07-15 15:35:59.111818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.112152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.112162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.112524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.112533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 
00:30:49.610 [2024-07-15 15:35:59.112842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.112851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.113193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.113203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.113523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.113533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.113828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.113838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.114160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.114170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.114462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.114471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.114780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.114792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.115125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.115135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.115477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.115487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.115825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.115835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 
00:30:49.610 [2024-07-15 15:35:59.116149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.116159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.116472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.116482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.116692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.116702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.117025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.117035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.117333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.117343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.117677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.117686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.118012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.118023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.118246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.118254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.118665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.118674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.119007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.119017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 
00:30:49.610 [2024-07-15 15:35:59.119326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.119336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.119522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.119531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.119760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.119775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.120088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.120097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.120422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.120431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.120767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.120776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.121180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.121190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.121517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.121526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.121875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.121887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.122162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.122172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 
00:30:49.610 [2024-07-15 15:35:59.122502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.122511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.122824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.122833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.610 qpair failed and we were unable to recover it. 00:30:49.610 [2024-07-15 15:35:59.123186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.610 [2024-07-15 15:35:59.123196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.123569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.123582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.123913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.123923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.124145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.124154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.124482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.124492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.124833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.124842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.125154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.125163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.125483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.125493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 
00:30:49.611 [2024-07-15 15:35:59.125794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.125803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.126129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.126138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.126483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.126493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.126794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.126803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.127001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.127013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.127385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.127394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.127697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.127707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.128103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.128113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.128417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.128427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.128718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.128727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 
00:30:49.611 [2024-07-15 15:35:59.129033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.129051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.129328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.129337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.129607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.129616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.129899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.129909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.130228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.130238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.130427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.130437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.130654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.130664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.131018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.131028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.131361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.131370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.131692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.131701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 
00:30:49.611 [2024-07-15 15:35:59.132041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.132050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.132388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.132398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.132779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.132788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.133111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.133122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.133473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.133482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.133816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.133826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.134139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.134149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.134396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.134405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.134704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.134714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.135005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.135016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 
00:30:49.611 [2024-07-15 15:35:59.135303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.135312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.135676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.135688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.135978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.611 [2024-07-15 15:35:59.135990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.611 qpair failed and we were unable to recover it. 00:30:49.611 [2024-07-15 15:35:59.136332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.136341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.136664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.136673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.137070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.137080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.137279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.137289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.137623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.137632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.137948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.137957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.138271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.138281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 
00:30:49.612 [2024-07-15 15:35:59.138616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.138626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.138999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.139010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.139302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.139311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.139632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.139641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.139975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.139985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.140325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.140335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.140663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.140672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.140962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.140972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.141271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.141280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.141571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.141581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 
00:30:49.612 [2024-07-15 15:35:59.141890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.141900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.142231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.142241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.142581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.142590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.142892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.142906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.143226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.143235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.143519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.143529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.143862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.143871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.144244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.144254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.144568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.144577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.144894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.144904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 
00:30:49.612 [2024-07-15 15:35:59.145235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.145244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.145584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.145596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.145911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.145921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.146230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.146239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.146535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.146544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.146882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.146898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.147227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.147237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.147541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.147550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.147911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.147921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.148233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.148243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 
00:30:49.612 [2024-07-15 15:35:59.148463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.148473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.148794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.148803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.149025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.149034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.149237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.149246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.612 qpair failed and we were unable to recover it. 00:30:49.612 [2024-07-15 15:35:59.149563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.612 [2024-07-15 15:35:59.149572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.149888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.149898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.150186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.150194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.150413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.150422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.150746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.150756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.150935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.150946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 
00:30:49.613 [2024-07-15 15:35:59.151272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.151281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.151565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.151575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.151942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.151951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.152268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.152277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.152561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.152571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.152911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.152921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.153243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.153251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.153566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.153575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.153909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.153921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.154185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.154194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 
00:30:49.613 [2024-07-15 15:35:59.154531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.154540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.154853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.154870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.155203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.155213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.155455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.155464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.155853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.155862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.156174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.156183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.156517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.156526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.156848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.156857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.157178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.157188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.157475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.157484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 
00:30:49.613 [2024-07-15 15:35:59.157819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.157829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.158117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.158126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.158450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.158459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.158640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.158650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.159064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.159074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.159369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.159379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.159711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.159720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.160037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.160054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.160381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.160390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.160726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.160736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 
00:30:49.613 [2024-07-15 15:35:59.161069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.161079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.161401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.161411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.161751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.161760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.162094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.162103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.162411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.162420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.162749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.613 [2024-07-15 15:35:59.162761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.613 qpair failed and we were unable to recover it. 00:30:49.613 [2024-07-15 15:35:59.163053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.163070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.163361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.163370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.163660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.163670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.163864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.163874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 
00:30:49.614 [2024-07-15 15:35:59.164111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.164120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.164426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.164436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.164746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.164755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.164976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.164986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.165297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.165306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.165597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.165607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.165913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.165922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.166170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.166179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.166461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.166471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.166768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.166778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 
00:30:49.614 [2024-07-15 15:35:59.167071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.167081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.167424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.167434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.167742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.167752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.168059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.168068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.168350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.168360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.168692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.168701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.168995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.169006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.169240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.169250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.169585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.169594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.169922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.169932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 
00:30:49.614 [2024-07-15 15:35:59.170203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.170212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.170502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.170511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.170823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.170833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.170997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.171007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.171230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.171240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.171535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.171545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.171877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.171889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.172255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.172264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.172580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.172589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.172901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.172914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 
00:30:49.614 [2024-07-15 15:35:59.173233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.173242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.173572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.173581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.173881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.173893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.174220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.174229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.174526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.174535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.174892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.174901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.175236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.175246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.614 qpair failed and we were unable to recover it. 00:30:49.614 [2024-07-15 15:35:59.175555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.614 [2024-07-15 15:35:59.175564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.175865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.175875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.176206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.176216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 
00:30:49.615 [2024-07-15 15:35:59.176517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.176527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.176857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.176867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.177185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.177195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.177526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.177536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.177879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.177899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.178229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.178238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.178557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.178567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.178877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.178889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.179216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.179225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.179413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.179424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 
00:30:49.615 [2024-07-15 15:35:59.179719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.179729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.180052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.180062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.180401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.180411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.180716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.180725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.181030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.181040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.181354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.181363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.181696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.181705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.181999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.182008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.182331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.182340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.182634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.182644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 
00:30:49.615 [2024-07-15 15:35:59.182984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.182994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.183381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.183390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.183719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.183728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.184059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.184071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.184406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.184416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.184785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.184794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.185091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.185101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.185421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.185430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.185722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.185731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.186051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.186060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 
00:30:49.615 [2024-07-15 15:35:59.186371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.186381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.186691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.186700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.187036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.615 [2024-07-15 15:35:59.187046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.615 qpair failed and we were unable to recover it. 00:30:49.615 [2024-07-15 15:35:59.187354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.187363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.187681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.187691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.188018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.188027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.188411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.188420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.188745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.188755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.189066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.189076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.189469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.189484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 
00:30:49.616 [2024-07-15 15:35:59.189762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.189771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.190079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.190089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.190380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.190390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.190506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.190515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.190727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.190736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.191097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.191107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.191311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.191320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.191629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.191638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.191965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.191974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.192322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.192331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 
00:30:49.616 [2024-07-15 15:35:59.192648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.192659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.192947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.192957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.193251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.193261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.193553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.193562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.193987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.193998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.194205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.194214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.194387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.194396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.194742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.194752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.195105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.195114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.195444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.195454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 
00:30:49.616 [2024-07-15 15:35:59.195658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.195667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.195852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.195861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.196169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.196179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.196576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.196585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.196911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.196921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.197236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.197246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.197541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.197551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.197881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.197895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.616 [2024-07-15 15:35:59.198105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.616 [2024-07-15 15:35:59.198115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.616 qpair failed and we were unable to recover it. 00:30:49.890 [2024-07-15 15:35:59.198416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.890 [2024-07-15 15:35:59.198427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.890 qpair failed and we were unable to recover it. 
00:30:49.891 [2024-07-15 15:35:59.198620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.198629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.198842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.198851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.199205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.199215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.199589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.199598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.199828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.200145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.200155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.200496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.200505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.200842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.200851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.201151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.201162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.201308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.201318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 
00:30:49.891 [2024-07-15 15:35:59.201630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.201639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.201983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.201994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.202309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.202318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.202649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.202658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.202977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.202987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.203370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.203379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.203562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.203572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.203902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.203913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.204151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.204160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.204473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.204483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 
00:30:49.891 [2024-07-15 15:35:59.204826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.204835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.205131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.205141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.205459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.205468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.205641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.205651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.205880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.205893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.206092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.206101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.206427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.206436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.206579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.206588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.206894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.206904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.207224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.207233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 
00:30:49.891 [2024-07-15 15:35:59.207451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.207460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.207761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.207771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.208127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.208136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.208437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.208447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.208767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.208776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.209108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.209118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.209417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.209426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.209742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.209751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.210068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.210077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.210383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.210393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 
00:30:49.891 [2024-07-15 15:35:59.210745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.891 [2024-07-15 15:35:59.210754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.891 qpair failed and we were unable to recover it. 00:30:49.891 [2024-07-15 15:35:59.211110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.211120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.211434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.211443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.211655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.211664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.211970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.211979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.212305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.212314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.212633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.212643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.212994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.213004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.213106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.213117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.213426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.213435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 
00:30:49.892 [2024-07-15 15:35:59.213764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.213773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.214105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.214115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.214443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.214452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.214657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.214666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.214952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.214963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.215271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.215280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.215566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.215575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.215902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.215912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.216243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.216252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.216465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.216475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 
00:30:49.892 [2024-07-15 15:35:59.216685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.216694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.216994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.217004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.217226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.217236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.217573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.217582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.217781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.217790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.218111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.218121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.218510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.218520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.218723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.218732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.219036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.219046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.219377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.219386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 
00:30:49.892 [2024-07-15 15:35:59.219701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.219710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.220002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.220012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.220241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.220250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.220553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.220562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.220874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.220895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.221220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.221232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.221572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.221582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.221886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.221897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.222101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.222111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.222456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.222465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 
00:30:49.892 [2024-07-15 15:35:59.222711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.222721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.892 [2024-07-15 15:35:59.223120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.892 [2024-07-15 15:35:59.223130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.892 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.223334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.223343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.223638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.223655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.223989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.223999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.224325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.224334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.224536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.224545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.224736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.224745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.225187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.225197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.225502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.225512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 
00:30:49.893 [2024-07-15 15:35:59.225721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.225731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.226000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.226010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.226333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.226342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.226578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.226587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.226960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.226970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.227170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.227180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.227428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.227438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.227751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.227760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.228076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.228085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.228454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.228463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 
00:30:49.893 [2024-07-15 15:35:59.228763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.228773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.229080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.229090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.229425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.229436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.229729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.229738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.230050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.230060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.230279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.230288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.230596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.230606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.230908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.230919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.231224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.231234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.231430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.231439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 
00:30:49.893 [2024-07-15 15:35:59.231757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.231766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.232105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.232114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.232479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.232488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.232549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.232558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.232797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.232807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.233020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.233029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.233343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.233352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.233664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.233674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.233995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.234004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.234298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.234308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 
00:30:49.893 [2024-07-15 15:35:59.234641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.234650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.234942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.234952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.235275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.893 [2024-07-15 15:35:59.235284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.893 qpair failed and we were unable to recover it. 00:30:49.893 [2024-07-15 15:35:59.235618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.235944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.235954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.236280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.236289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.236588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.236598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.236811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.236820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.237224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.237233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.237532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.237542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 
00:30:49.894 [2024-07-15 15:35:59.237889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.237899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.238235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.238245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.238558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.238566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.238947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.238958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.239300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.239309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.239643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.239652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.239963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.239973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.240285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.240294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.240656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.240665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.240999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.241008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 
00:30:49.894 [2024-07-15 15:35:59.241400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.241409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.241718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.241728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.242029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.242038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.242242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.242253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.242564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.242574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.242921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.243124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.243134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.243409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.243419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.243738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.243747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.244079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.244089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 
00:30:49.894 [2024-07-15 15:35:59.244398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.244407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.244705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.244714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.244930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.244939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.245250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.245260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.245602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.245611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.245897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.245906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.246198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.246207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.246500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.246510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.246847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.246856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.247252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.247262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 
00:30:49.894 [2024-07-15 15:35:59.247555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.247565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.247861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.247870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.248176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.248186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.248489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.248499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.894 qpair failed and we were unable to recover it. 00:30:49.894 [2024-07-15 15:35:59.248819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.894 [2024-07-15 15:35:59.248828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.249193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.249203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.249544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.249554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.249774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.249784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.250080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.250089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.250475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.250485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 
00:30:49.895 [2024-07-15 15:35:59.250827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.250840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.251032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.251042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.251357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.251367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.251652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.251662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.252000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.252010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.252332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.252341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.252652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.252661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.253044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.253054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.253387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.253396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.253728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.253737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 
00:30:49.895 [2024-07-15 15:35:59.253984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.253994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.254311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.254320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.254712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.254721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.255040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.255051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.255363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.255373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.255711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.255721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.256039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.256048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.256373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.256383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.256699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.256709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.257048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.257058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 
00:30:49.895 [2024-07-15 15:35:59.257363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.257372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.257687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.257696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.257998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.258007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.258385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.258394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.258703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.258712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.259005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.259015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.259337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.259347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.259685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.895 [2024-07-15 15:35:59.259696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.895 qpair failed and we were unable to recover it. 00:30:49.895 [2024-07-15 15:35:59.259921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.259931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.260117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.260127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 
00:30:49.896 [2024-07-15 15:35:59.260434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.260443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.260775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.260784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.261058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.261068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.261357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.261367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.261676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.261685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.262069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.262079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.262399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.262409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.262738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.262747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.263012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.263022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.263346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.263355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 
00:30:49.896 [2024-07-15 15:35:59.263671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.263680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.263926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.263935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.264239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.264248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.264545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.264554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.264854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.264864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.265177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.265187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.265493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.265503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.265719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.265728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.265957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.265967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.266298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.266307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 
00:30:49.896 [2024-07-15 15:35:59.266615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.266625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.266975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.266985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.267183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.267192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.267577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.267586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.267904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.267914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.268242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.268251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.268541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.268550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.268815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.268824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.269125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.269136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.269509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.269518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 
00:30:49.896 [2024-07-15 15:35:59.269690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.269700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.270028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.270038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.270338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.270348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.270661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.270671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.270972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.270988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.271303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.271313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.271604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.271614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.271899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.271909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.272284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.272293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 00:30:49.896 [2024-07-15 15:35:59.272584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.896 [2024-07-15 15:35:59.272593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.896 qpair failed and we were unable to recover it. 
00:30:49.897 [2024-07-15 15:35:59.272907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.272917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.273240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.273250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.273593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.273602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.273893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.273909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.274255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.274264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.274577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.274587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.274803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.274813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.275122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.275132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.275440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.275449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.275782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.275792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 
00:30:49.897 [2024-07-15 15:35:59.276069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.276079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.276399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.276408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.276722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.276731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.277112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.277122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.277558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.277567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.277843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.277852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.278163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.278173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.278508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.278518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.278824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.278833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.279147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.279157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 
00:30:49.897 [2024-07-15 15:35:59.279503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.279513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.279846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.279856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.280196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.280206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.280503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.280513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.280834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.280844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.281184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.281197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.281534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.281544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.281850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.281860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.282051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.282063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.282257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.282266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 
00:30:49.897 [2024-07-15 15:35:59.282552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.282835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.282845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.283168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.283178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.283519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.283529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.283869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.283879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.284212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.284223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.284429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.284439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.284641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.284650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.284979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.284989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.285302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.285311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 
00:30:49.897 [2024-07-15 15:35:59.285518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.897 [2024-07-15 15:35:59.285527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.897 qpair failed and we were unable to recover it. 00:30:49.897 [2024-07-15 15:35:59.285733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.285742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.286053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.286063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.286376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.286385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.286760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.286769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.286954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.286966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.287312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.287322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.287663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.287672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.288002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.288012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.288300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.288310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 
00:30:49.898 [2024-07-15 15:35:59.288541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.288550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.288866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.288875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.289249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.289261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.289586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.289595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.289894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.289904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.290203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.290212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.290557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.290566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.290887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.290902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.291199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.291208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.291522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.291531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 
00:30:49.898 [2024-07-15 15:35:59.291733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.291742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.292063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.292073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.292222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.292231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.292527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.292536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.292840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.292849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.293156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.293166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.293560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.293569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.293967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.293977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.294198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.294208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.294590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.294600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 
00:30:49.898 [2024-07-15 15:35:59.294874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.294886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.295100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.295110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.295393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.295403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.295683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.295692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.295893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.295903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.296225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.296234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.296521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.296531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.296868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.296878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.297208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.297217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.297532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.297544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 
00:30:49.898 [2024-07-15 15:35:59.297851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.297861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.298189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.898 [2024-07-15 15:35:59.298199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.898 qpair failed and we were unable to recover it. 00:30:49.898 [2024-07-15 15:35:59.298581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.298591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.298924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.298934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.299265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.299275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.299572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.299581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.299870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.299880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.300212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.300222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.300533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.300542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.300871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.300881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 
00:30:49.899 [2024-07-15 15:35:59.301216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.301227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.301557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.301566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.301874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.301887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.302264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.302273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.302586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.302595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.302902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.302914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.303211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.303220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.303546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.303555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.303897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.303907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.304289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.304298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 
00:30:49.899 [2024-07-15 15:35:59.304588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.304605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.304932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.304942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.305230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.305239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.305565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.305574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.305887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.305898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.306207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.306216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.306556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.306566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.306760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.306770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.307084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.307095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.307399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.307408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 
00:30:49.899 [2024-07-15 15:35:59.307740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.307749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.308062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.308071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.308385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.308395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.308758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.308767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.309080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.309090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.309401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.309410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.309734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.309744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.899 qpair failed and we were unable to recover it. 00:30:49.899 [2024-07-15 15:35:59.310082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.899 [2024-07-15 15:35:59.310091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.310427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.310436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.310812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.310822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 
00:30:49.900 [2024-07-15 15:35:59.311143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.311162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.311465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.311475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.311812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.311822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.312022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.312032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.312314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.312323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.312625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.312634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.312935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.312944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.313260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.313269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.313577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.313587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.313889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.313899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 
00:30:49.900 [2024-07-15 15:35:59.314239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.314248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.314552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.314561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.314744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.314753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.315046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.315056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.315382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.315391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.315686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.315702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.316018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.316027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.316325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.316334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.316654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.316664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.317031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.317040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 
00:30:49.900 [2024-07-15 15:35:59.317361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.317370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.317708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.317717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.318084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.318093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.318385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.318394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.318716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.318726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.319066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.319077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.319412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.319421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.319747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.319759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.320069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.320080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.320414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.320424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 
00:30:49.900 [2024-07-15 15:35:59.320734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.320743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.321049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.321058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.321276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.321285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.321597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.321606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.321904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.321913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.322234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.322243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.322537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.322547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.322838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.322847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.900 [2024-07-15 15:35:59.323136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.900 [2024-07-15 15:35:59.323147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.900 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.323459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.323468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 
00:30:49.901 [2024-07-15 15:35:59.323809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.323818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.324216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.324226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.324578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.324588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.324907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.324916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.325236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.325245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.325583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.325593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.325900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.325910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.326241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.326250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.326540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.326549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.326899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.326913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 
00:30:49.901 [2024-07-15 15:35:59.327213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.327230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.327538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.327547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.327846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.327856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.328183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.328193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.328487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.328499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.328803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.328812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.329112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.329122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.329456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.329466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.329758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.329768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.330154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.330163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 
00:30:49.901 [2024-07-15 15:35:59.330454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.330464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.330804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.330813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.331181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.331190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.331551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.331561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.331854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.331864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.332187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.332196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.332533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.332542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.332831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.332841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.333078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.333088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.333400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.333409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 
00:30:49.901 [2024-07-15 15:35:59.333694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.333704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.333908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.333918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.334244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.334253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.334545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.334555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.334896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.334910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.335222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.335231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.335542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.335552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.335850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.335859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.336206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.336215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 00:30:49.901 [2024-07-15 15:35:59.336525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.901 [2024-07-15 15:35:59.336534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.901 qpair failed and we were unable to recover it. 
00:30:49.901 [2024-07-15 15:35:59.336863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.336873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.337252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.337261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.337571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.337580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.337894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.337903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.338212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.338222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.338516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.338525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.338893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.338906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.339243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.339252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.339615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.339624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.339928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.339938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 
00:30:49.902 [2024-07-15 15:35:59.340228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.340238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.340549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.340558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.340844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.340853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.341022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.341033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.341323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.341332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.341653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.341663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.341991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.342001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.342312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.342321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.342658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.342667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.342899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.342913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 
00:30:49.902 [2024-07-15 15:35:59.343207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.343217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.343544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.343553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.343887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.343896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.344262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.344271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.344600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.344609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.344900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.344910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.345245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.345255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.345496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.345505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.345840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.345849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.346056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.346065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 
00:30:49.902 [2024-07-15 15:35:59.346403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.346412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.346718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.346727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.347041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.347051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.347334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.347343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.347681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.347690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.347863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.347872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.348199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.348209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.348508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.348517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.348856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.348866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.349058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.349069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 
00:30:49.902 [2024-07-15 15:35:59.349343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.349352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.902 qpair failed and we were unable to recover it. 00:30:49.902 [2024-07-15 15:35:59.349621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.902 [2024-07-15 15:35:59.349631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.349927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.349939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.350273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.350282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.350596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.350605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.350877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.350890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.351210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.351220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.351456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.351465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.351733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.351742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.352080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.352089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 
00:30:49.903 [2024-07-15 15:35:59.352429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.352438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.352759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.352768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.353070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.353080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.353372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.353381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.353753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.353762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.354065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.354075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.354382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.354392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.354728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.354737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.354959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.354969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.355269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.355278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 
00:30:49.903 [2024-07-15 15:35:59.355591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.355600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.355931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.355941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.356256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.356265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.356573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.356583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.356894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.356903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.357198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.357208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.357517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.357527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.357855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.357864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.358049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.358268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.358280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 
00:30:49.903 [2024-07-15 15:35:59.358582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.358592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.358891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.358905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.359082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.359093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.359411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.359420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.359717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.359726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.360037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.360053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.903 [2024-07-15 15:35:59.360263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.903 [2024-07-15 15:35:59.360272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.903 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.360587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.360596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.360982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.360991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.361304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.361313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 
00:30:49.904 [2024-07-15 15:35:59.361648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.361657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.362080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.362090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.362400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.362410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.362740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.362749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.363062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.363072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.363357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.363366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.363663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.363672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.363878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.363890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.364210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.364219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.364587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.364596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 
00:30:49.904 [2024-07-15 15:35:59.364893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.364903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.365217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.365226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.365525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.365534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.365862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.365871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.366208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.366218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.366531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.366540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.366729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.366742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.367079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.367089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.367426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.367435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.367846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.367855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 
00:30:49.904 [2024-07-15 15:35:59.368219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.368228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.368565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.368574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.368709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.368719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.369044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.369054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.369359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.369368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.369655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.369665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.370000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.370010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.370327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.370337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.370664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.370674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.371005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.371016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 
00:30:49.904 [2024-07-15 15:35:59.371351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.371361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.371673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.371682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.371999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.372014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.372379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.372388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.372692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.372701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.372999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.373009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.373372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.373381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.373680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.904 [2024-07-15 15:35:59.373690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.904 qpair failed and we were unable to recover it. 00:30:49.904 [2024-07-15 15:35:59.374042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.374052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.374273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.374282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 
00:30:49.905 [2024-07-15 15:35:59.374609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.374618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.374897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.374910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.375223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.375232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.375517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.375526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.375864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.375873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.376168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.376178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.376526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.376535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.376861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.376871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.377174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.377184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.377527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 
00:30:49.905 [2024-07-15 15:35:59.377826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.377835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.378151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.378161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.378494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.378504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.378794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.378804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.378986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.378997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.379331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.379341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.379693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.379703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.380038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.380048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.380386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.380395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.380717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.380726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 
00:30:49.905 [2024-07-15 15:35:59.381084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.381094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.381387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.381397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.381586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.381596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.381795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.381804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.382130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.382139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.382436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.382445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.382722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.382732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.383052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.383061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.383415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.383424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.383797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.383807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 
00:30:49.905 [2024-07-15 15:35:59.384105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.384115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.384422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.384431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.384577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.384587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.384778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.384787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.385083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.385092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.385384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.385394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.385608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.385617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.385937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.385947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.386267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.386276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.905 qpair failed and we were unable to recover it. 00:30:49.905 [2024-07-15 15:35:59.386669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.905 [2024-07-15 15:35:59.386678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 
00:30:49.906 [2024-07-15 15:35:59.386961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.386970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.387264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.387274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.387594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.387604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.387936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.387947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.388172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.388184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.388487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.388497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.388789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.388800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.389147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.389157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.389449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.389459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.389841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.389851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 
00:30:49.906 [2024-07-15 15:35:59.390103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.390114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.390438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.390447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.390717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.390726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.391061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.391071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.391406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.391415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.391703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.391712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.392037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.392047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.392348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.392358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.392665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.392674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.392990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.393000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 
00:30:49.906 [2024-07-15 15:35:59.393334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.393343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.393679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.393689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.394038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.394048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.394354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.394364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.394673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.394682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.395004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.395015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.395331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.395341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.395651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.395661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.395973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.395983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.396318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.396328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 
00:30:49.906 [2024-07-15 15:35:59.396651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.396660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.396981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.396993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.397306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.397316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.397655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.397666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.397994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.398004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.398308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.398317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.398644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.398653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.398949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.398959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.399266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.399275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 00:30:49.906 [2024-07-15 15:35:59.399585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.906 [2024-07-15 15:35:59.399595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.906 qpair failed and we were unable to recover it. 
00:30:49.906 [2024-07-15 15:35:59.399905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.399915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.400291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.400300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.400645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.400654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.400965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.400975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.401287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.401296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.401670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.401680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.402012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.402021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.402350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.402359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.402671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.402680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.403014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.403024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 
00:30:49.907 [2024-07-15 15:35:59.403323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.403332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.403646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.403655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.403987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.403997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.404204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.404214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.404547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.404556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.404870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.404879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.405221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.405231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.405526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.405536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.405858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.405867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.406192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.406202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 
00:30:49.907 [2024-07-15 15:35:59.406540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.406549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.406857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.406867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.407212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.407223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.407543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.407553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.407890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.407901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.408227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.408236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.408582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.408591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.408925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.408935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.409242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.409252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.409577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.409586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 
00:30:49.907 [2024-07-15 15:35:59.409980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.409989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.410317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.410327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.410657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.410666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.411045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.411056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.411340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.411349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.411663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.411672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.411845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.411856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.412171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.412181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.412385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.412394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.412714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.412723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 
00:30:49.907 [2024-07-15 15:35:59.413059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.413069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.907 qpair failed and we were unable to recover it. 00:30:49.907 [2024-07-15 15:35:59.413404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.907 [2024-07-15 15:35:59.413413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.413759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.413769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.413995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.414005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.414365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.414375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.414676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.414686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.414984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.414994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.415281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.415290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.415603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.415612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.415991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.416001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 
00:30:49.908 [2024-07-15 15:35:59.416323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.416332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.416653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.416662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.416938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.416948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.417273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.417283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.417550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.417559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.417872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.417881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.418110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.418119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.418451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.418460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.418579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.418587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.418764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.418776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 
00:30:49.908 [2024-07-15 15:35:59.419089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.419099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.419393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.419402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.419691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.419700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.420020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.420029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.420356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.420365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.420705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.420714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.420940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.420950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.421265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.421275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.421587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.421597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.421934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.421944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 
00:30:49.908 [2024-07-15 15:35:59.422252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.422262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.422652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.422662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.422957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.422970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.423288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.423298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.423636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.423645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.423956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.423966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.424262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.908 [2024-07-15 15:35:59.424272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.908 qpair failed and we were unable to recover it. 00:30:49.908 [2024-07-15 15:35:59.424610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.424619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.424952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.424963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.425315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.425324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 
00:30:49.909 [2024-07-15 15:35:59.425630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.425640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.425979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.425988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.426290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.426305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.426589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.426598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.426962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.426972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.427270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.427280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.427624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.427636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.427878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.427890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.428120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.428129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.428434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.428443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 
00:30:49.909 [2024-07-15 15:35:59.428741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.428750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.428944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.428954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.429294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.429304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.429640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.429650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.429949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.429959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.430283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.430292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.430606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.430615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.430947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.430957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.431283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.431292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.431598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.431608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 
00:30:49.909 [2024-07-15 15:35:59.431920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.431930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.432238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.432247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.432553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.432562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.432776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.432785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.433079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.433089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.433505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.433514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.433799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.433809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.434142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.434152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.434444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.434454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.434794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.434804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 
00:30:49.909 [2024-07-15 15:35:59.435160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.435170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.435507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.435516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.435836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.435846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.436190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.436202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.436514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.436523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.436927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.436937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.437137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.437146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.909 [2024-07-15 15:35:59.437458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.909 [2024-07-15 15:35:59.437467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.909 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.437773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.437782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.438118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.438128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 
00:30:49.910 [2024-07-15 15:35:59.438516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.438525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.438859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.438868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.439199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.439209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.439577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.439587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.439895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.439909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.440219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.440228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.440571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.440580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.440873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.440886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.441174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.441184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.441417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.441425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 
00:30:49.910 [2024-07-15 15:35:59.441718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.441733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.442056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.442065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.442372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.442382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.442681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.442691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.443034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.443044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.443366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.443375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.443675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.443685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.444077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.444088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.444405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.444414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.444744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.444753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 
00:30:49.910 [2024-07-15 15:35:59.445064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.445073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.445295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.445304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.445429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.445439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.445712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.445722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.446036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.446045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.446370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.446379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.446675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.446685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.446876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.446889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.447207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.447216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.447558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.447567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 
00:30:49.910 [2024-07-15 15:35:59.447896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.447910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.448224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.448234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.448557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.448566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.448861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.448870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.449248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.449258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.449602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.449612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.449770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.449781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.450071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.450081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.450399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.910 [2024-07-15 15:35:59.450408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.910 qpair failed and we were unable to recover it. 00:30:49.910 [2024-07-15 15:35:59.450730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.450739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 
00:30:49.911 [2024-07-15 15:35:59.450970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.450980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.451323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.451332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.451657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.451667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.451965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.451975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.452299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.452308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.452606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.452615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.452930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.452939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.453149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.453159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.453443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.453453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.453782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.453792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 
00:30:49.911 [2024-07-15 15:35:59.453987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.453996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.454322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.454332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.454636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.454646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.455021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.455030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.455359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.455368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.455716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.455725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.456073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.456084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.456393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.456402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.456710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.456726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.457076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.457085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 
00:30:49.911 [2024-07-15 15:35:59.457409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.457419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.457754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.457765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.458093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.458109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.458456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.458465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.458846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.458855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.459078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.459088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.459466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.459475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.459772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.459782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.460012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.460022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.460351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.460361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 
00:30:49.911 [2024-07-15 15:35:59.460544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.460554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.460773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.460782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.460938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.460947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.461165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.461174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.461459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.461468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.461777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.461787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.462119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.462129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.462522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.462532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.462850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.462860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 00:30:49.911 [2024-07-15 15:35:59.463190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.911 [2024-07-15 15:35:59.463200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.911 qpair failed and we were unable to recover it. 
00:30:49.911 [2024-07-15 15:35:59.463402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.463411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.463736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.463745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.464087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.464097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.464436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.464446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.464757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.464767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.465107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.465122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.465387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.465396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.465604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.465613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.465935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.465947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.466251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.466261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 
00:30:49.912 [2024-07-15 15:35:59.466460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.466469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.466776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.466785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.467119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.467128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.467526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.467535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.467860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.467869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.468136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.468146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.468456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.468466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.468674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.468684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.469030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.469039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.469349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.469358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 
00:30:49.912 [2024-07-15 15:35:59.469677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.469686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.470067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.470077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.470368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.470378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.470589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.470598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.470924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.470940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.471269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.471278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.471618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.471627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.471845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.471854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.472216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.472226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.472537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.472546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 
00:30:49.912 [2024-07-15 15:35:59.472870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.472881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.473215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.473224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.473558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.473568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.473877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.473891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.474203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.474214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.474403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.912 [2024-07-15 15:35:59.474413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.912 qpair failed and we were unable to recover it. 00:30:49.912 [2024-07-15 15:35:59.474765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.474775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.474857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.474866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.475229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.475238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.475557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.475567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 
00:30:49.913 [2024-07-15 15:35:59.475893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.475907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.476262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.476271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.476573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.476582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.476785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.476795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.477114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.477124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.477467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.477476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.477796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.477806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.478132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.478142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.478522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.478531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.478838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.478848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 
00:30:49.913 [2024-07-15 15:35:59.479147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.479156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.479360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.479369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.479679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.479688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.479908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.479919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.480130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.480139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.480495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.480505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.480832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.480841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.481187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.481197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.481399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.481408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.481695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.481704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 
00:30:49.913 [2024-07-15 15:35:59.481938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.481948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.482268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.482278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.482596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.482605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.482921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.482930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.483262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.483272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.483458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.483695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.483705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.484057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.484067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.484372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.484382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.484695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.484705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 
00:30:49.913 [2024-07-15 15:35:59.484898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.484909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.485141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.485150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.485457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.485468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.485775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.485784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.486103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.486113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.486406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.486416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.486750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.486766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.913 qpair failed and we were unable to recover it. 00:30:49.913 [2024-07-15 15:35:59.487089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.913 [2024-07-15 15:35:59.487098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.487285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.487294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.487680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.487689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 
00:30:49.914 [2024-07-15 15:35:59.487985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.488264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.488273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.488486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.488495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.488807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.488816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.489127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.489136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.489446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.489455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.489778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.489788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.490185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.490195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.490495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.490504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.490837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.490846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 
00:30:49.914 [2024-07-15 15:35:59.491167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.491177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.491481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.491491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.491831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.491840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.492222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.492232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.492546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.492555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.492867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.492876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.493080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.493090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.493403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.493412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.493725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.493734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.494052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.494062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 
00:30:49.914 [2024-07-15 15:35:59.494444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.494453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.494768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.494777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.495112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.495122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.495439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.495451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.495640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.495651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.495851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.495861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.496189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.496199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.496514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.496523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.496818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.496827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 00:30:49.914 [2024-07-15 15:35:59.496946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:49.914 [2024-07-15 15:35:59.496956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:49.914 qpair failed and we were unable to recover it. 
00:30:50.192 [2024-07-15 15:35:59.497219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.192 [2024-07-15 15:35:59.497229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.192 qpair failed and we were unable to recover it. 00:30:50.192 [2024-07-15 15:35:59.497555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.192 [2024-07-15 15:35:59.497565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.192 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.497774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.497783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.498094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.498104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.498407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.498416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.498728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.498737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.499011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.499020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.499339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.499349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.499663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.499673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.499893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.499908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 
00:30:50.193 [2024-07-15 15:35:59.500119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.500128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.500342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.500351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.500541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.500552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.500882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.500896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.501228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.501238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.501554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.501563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.501857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.501866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.502199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.502398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.502407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.502730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.502740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 
00:30:50.193 [2024-07-15 15:35:59.503035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.503048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.503353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.503362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.503683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.503693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.503991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.504001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.504346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.504356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.504691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.504700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.504895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.504905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.193 qpair failed and we were unable to recover it. 00:30:50.193 [2024-07-15 15:35:59.505210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.193 [2024-07-15 15:35:59.505219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.505517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.505527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.505848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.505857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 
00:30:50.194 [2024-07-15 15:35:59.506261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.506270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.506579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.506589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.506734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.506743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.507051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.507061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.507442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.507452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.507621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.507630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.507953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.507963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.508289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.508299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.508488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.508499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.508793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.508802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 
00:30:50.194 [2024-07-15 15:35:59.509101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.509110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.509399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.509408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.509736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.509745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.510067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.510077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.510387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.510396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.510714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.510724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.511040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.511050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.511344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.511353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.511647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.511657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.511946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.511956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 
00:30:50.194 [2024-07-15 15:35:59.512278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.512288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.512627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.512637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.512981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.512991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.513290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.513300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.513621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.513630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.513958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.194 [2024-07-15 15:35:59.513967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.194 qpair failed and we were unable to recover it. 00:30:50.194 [2024-07-15 15:35:59.514297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.514307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.514617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.514628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.514944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.514953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.515299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.515308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 
00:30:50.195 [2024-07-15 15:35:59.515646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.515655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.515958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.515968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.516288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.516297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.516637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.516647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.516993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.517003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.517311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.517327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.517636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.517645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.517988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.517999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.518322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.518331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.518561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.518570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 
00:30:50.195 [2024-07-15 15:35:59.518708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.518717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.519023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.519033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.519356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.519365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.519676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.519685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.519995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.520005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.520352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.520361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.520658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.520667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.520975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.520984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.195 [2024-07-15 15:35:59.521305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.195 [2024-07-15 15:35:59.521315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.195 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.521636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.521646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 
00:30:50.196 [2024-07-15 15:35:59.521951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.521961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.522282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.522291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.522561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.522570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.522901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.522911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.523231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.523241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.523553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.523562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.523879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.523898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.524121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.524130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.524473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.524485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.524811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.524821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 
00:30:50.196 [2024-07-15 15:35:59.525190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.525200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.525496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.525506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.525801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.525810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.526129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.526139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.526472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.526481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.526809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.526819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.527139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.527150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.527438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.527448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.527607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.527618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.527948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.527958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 
00:30:50.196 [2024-07-15 15:35:59.528301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.528310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.528617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.528626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.196 [2024-07-15 15:35:59.528985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.196 [2024-07-15 15:35:59.528995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.196 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.529323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.529332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.529559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.529568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.529830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.529839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.530136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.530146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.530445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.530454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.530745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.530754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.531057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.531067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 
00:30:50.197 [2024-07-15 15:35:59.531376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.531386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.531681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.531690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.531979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.531990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.532302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.532312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.532629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.532640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.532980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.532992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.533304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.533321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.533638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.533647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.534047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.534057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.534394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.534403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 
00:30:50.197 [2024-07-15 15:35:59.534741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.534751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.535039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.535049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.535364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.535374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.535554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.535564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.535959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.535969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.536261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.197 [2024-07-15 15:35:59.536270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.197 qpair failed and we were unable to recover it. 00:30:50.197 [2024-07-15 15:35:59.536594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.536603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.536935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.536946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.537254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.537263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.537580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.537596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 
00:30:50.198 [2024-07-15 15:35:59.537919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.537929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.538323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.538332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.538615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.538632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.538945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.538955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.539269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.539278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.539626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.539635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.539814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.539825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.540158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.540168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.540485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.540494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.540826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.540835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 
00:30:50.198 [2024-07-15 15:35:59.541217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.541227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.541557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.541566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.541926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.541935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.542256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.542265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.542466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.542475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.542848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.542858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.543089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.543100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.543376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.543386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.543726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.543736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 00:30:50.198 [2024-07-15 15:35:59.543921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.198 [2024-07-15 15:35:59.543932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.198 qpair failed and we were unable to recover it. 
00:30:50.198 [2024-07-15 15:35:59.544208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.544217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.544553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.544562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.544901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.544911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.545208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.545218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.545542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.545551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.545845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.545855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.546180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.546190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.546505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.546515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.546831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.546840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.547220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.547231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 
00:30:50.199 [2024-07-15 15:35:59.547568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.547577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.547899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.547909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.548261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.548271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.548595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.548604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.548797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.548806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.549130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.549140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.549303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.549313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.549697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.549707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.549918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.549927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.550226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.550235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 
00:30:50.199 [2024-07-15 15:35:59.550549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.199 [2024-07-15 15:35:59.550558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.199 qpair failed and we were unable to recover it. 00:30:50.199 [2024-07-15 15:35:59.550897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.550912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.551250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.551259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.551534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.551543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.551860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.551869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.552256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.552267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.552585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.552594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.552947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.553273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.553283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.553604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.553613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 
00:30:50.200 [2024-07-15 15:35:59.553952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.553963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.554289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.554298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.554619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.554629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.554968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.554981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.555186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.555196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.555525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.555534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.555837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.555847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.556057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.556067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.556397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.556406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 00:30:50.200 [2024-07-15 15:35:59.556730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.200 [2024-07-15 15:35:59.556739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.200 qpair failed and we were unable to recover it. 
00:30:50.201 [2024-07-15 15:35:59.557059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.557069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.557301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.557310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.557607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.557617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.557814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.557825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.558148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.558157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.558478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.558487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.558819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.558829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.559172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.559182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.559466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.559476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.559817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.559827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 
00:30:50.201 [2024-07-15 15:35:59.560120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.560130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.560438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.560448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.560761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.560771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.561004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.561014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.561404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.561413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.561726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.561736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.562043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.562052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.562358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.562368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.562647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.562656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.562945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.562962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 
00:30:50.201 [2024-07-15 15:35:59.563369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.563380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.563689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.563698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.563999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.564009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.564320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.564329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.564649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.564659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.565017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.565026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.565404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.565412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.565712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.565729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.566046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.566055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.566365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.566375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 
00:30:50.201 [2024-07-15 15:35:59.566714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.566723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.566960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.566970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.567275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.567284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.567577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.567587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.567929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.567939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.568277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.568286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.568607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.568616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.568951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.568962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.569270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.569279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.569589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.569606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 
00:30:50.201 [2024-07-15 15:35:59.569933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.569943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.570133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.570143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.570481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.570491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.570802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.570812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.571124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.571134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.571464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.201 [2024-07-15 15:35:59.571474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.201 qpair failed and we were unable to recover it. 00:30:50.201 [2024-07-15 15:35:59.571817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.571826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.572151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.572166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.572489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.572498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.572691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.572701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.573035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.573045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.573333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.573343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.573649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.573658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.573991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.574001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.574326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.574335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.574657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.574666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.574850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.574859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.575164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.575175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.575369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.575378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.575692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.575702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.576020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.576029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.576430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.576440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.576742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.576752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.576960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.576970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.577319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.577328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.577659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.577668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.577962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.577972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.578265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.578274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.578594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.578604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.578949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.578960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.579139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.579148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.579474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.579483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.579824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.579833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.580141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.580152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.580460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.580470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.580792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.580802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.581135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.581145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.581456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.581473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.581804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.581813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.582139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.582149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.582465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.582474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.582869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.582878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.583213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.583223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.583488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.583497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.583861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.583870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.584083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.584093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.584465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.584474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.584678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.584688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.584998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.585010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.585360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.585369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.585734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.585743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.586052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.586061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.586378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.586387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.586728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.586738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.587078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.587089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.587397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.587407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.587725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.587735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.588071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.588080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.588421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.588431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.588759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.588768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 
00:30:50.202 [2024-07-15 15:35:59.589083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.589093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.589396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.589405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.202 qpair failed and we were unable to recover it. 00:30:50.202 [2024-07-15 15:35:59.589741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.202 [2024-07-15 15:35:59.589750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.590067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.590077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.590366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.590376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.590709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.590719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.591023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.591033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.591224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.591235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.591570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.591579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.591906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.591915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 
00:30:50.203 [2024-07-15 15:35:59.592235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.592251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.592568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.592577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.592898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.592908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.593232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.593241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.593434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.593444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.593623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.593635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.593991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.594000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.594294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.594303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.594646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.594655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.594973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.594983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 
00:30:50.203 [2024-07-15 15:35:59.595318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.595327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.595664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.595673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.596019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.596029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.596346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.596356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.596670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.596679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.597015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.597025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.597339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.597348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.597667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.597682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.597999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.598009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.598303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.598312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 
00:30:50.203 [2024-07-15 15:35:59.598608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.598617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.598929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.598940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.599257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.599267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.599456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.599466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.599784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.599793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.600119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.600128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.600441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.600450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.600820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.600829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.601152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.601162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.601493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.601503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 
00:30:50.203 [2024-07-15 15:35:59.601869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.601878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.602182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.602192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.602508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.602520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.602842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.602852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.603168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.603178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.603357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.603367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.603698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.603707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.604015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.604025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.604342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.604351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.604668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.604677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 
00:30:50.203 [2024-07-15 15:35:59.605062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.605072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.605271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.203 [2024-07-15 15:35:59.605280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.203 qpair failed and we were unable to recover it. 00:30:50.203 [2024-07-15 15:35:59.605551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.605561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.605757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.605767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.606077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.606087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.606453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.606462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.606754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.606764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.607033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.607043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.607351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.607361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.607549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.607559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 
00:30:50.204 [2024-07-15 15:35:59.607838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.607847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.608224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.608234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.608546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.608555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.608892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.608902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.609307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.609316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.609602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.609611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.609915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.609925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.610159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.610168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.610503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.610512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.610845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.610855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 
00:30:50.204 [2024-07-15 15:35:59.611171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.611181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.611494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.611504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.611836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.611846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.612181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.612191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.612577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.612587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.612775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.612786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.613064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.613075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.613372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.613382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.613706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.613715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.614076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.614085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 
00:30:50.204 [2024-07-15 15:35:59.614251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.614261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.614562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.614571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.614900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.614914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.615241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.615250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.615540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.615550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.615847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.615856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.616237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.616247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.616589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.616598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.616988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.616997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.617307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.617317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 
00:30:50.204 [2024-07-15 15:35:59.617625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.617634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.617921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.617931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.618277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.618287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.618585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.618595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.618937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.618947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.619260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.619269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.619579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.619588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.204 [2024-07-15 15:35:59.619891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.204 [2024-07-15 15:35:59.619901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.204 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.620249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.620258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.620497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.620506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.620705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.620715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.621019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.621029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.621361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.621370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.621594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.621602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.621898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.621908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.622241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.622250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.622592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.622602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.622918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.622928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.623113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.623124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.623434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.623444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.623732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.623743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.624063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.624072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.624393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.624402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.624737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.624746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.625088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.625097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.625386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.625396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.625726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.625735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.626092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.626101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.626420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.626429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.626745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.626755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.627120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.627130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.627437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.627446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.627745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.627754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.627944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.627954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.628188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.628198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.628508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.628518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.628856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.628865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.629208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.629218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.629605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.629615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.629952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.629962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.630307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.630316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.630557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.630566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.630868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.630878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.631226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.631238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.631469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.631478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.631841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.631850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.632213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.632222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.632556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.632567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.632909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.632919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.633239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.633248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.633582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.633591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.633891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.633900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.634185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.634194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.634504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.634514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.634832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.634841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.635061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.635071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.635378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.635387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.635711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.635720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.636033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.636042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.636381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.636391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 
00:30:50.205 [2024-07-15 15:35:59.636732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.636741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.637031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.637041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.637387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.637396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.637695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.637705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.638045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.638054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.638369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.638378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.638709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.638718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.639054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.639065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.205 [2024-07-15 15:35:59.639358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.205 [2024-07-15 15:35:59.639367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.205 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.639677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.639692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.640014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.640023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.640358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.640368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.640718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.640727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.641017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.641034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.641343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.641354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.641586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.641595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.641929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.641939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.642235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.642244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.642561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.642570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.642903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.642914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.642996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.643005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.643325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.643334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.643689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.643697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.644018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.644028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.644341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.644350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.644637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.644646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.644960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.644970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.645281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.645290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.645588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.645598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.645800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.645809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.646005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.646014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.646196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.646205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.646527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.646536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.646857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.646867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.647195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.647206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.647516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.647531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.647905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.647914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.648208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.648217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.648511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.648528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.648813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.648823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.649203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.649212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.649617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.649626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.649959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.649968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.650282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.650292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.650629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.650639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.650988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.650998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.651195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.651204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.651525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.651535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.651851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.651861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.652061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.652070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.652384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.652393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.652770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.652780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.653099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.653109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.653319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.653328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.653666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.653676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.654011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.654023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.654332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.654341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.654538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.654549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.654742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.654752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.655076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.655086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 
00:30:50.206 [2024-07-15 15:35:59.655393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.655402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.655719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.655728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.655962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.655972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.656283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.656292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.656489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.656499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.656805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.656814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.657151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.657161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.657499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.657508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.657844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.206 [2024-07-15 15:35:59.657853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.206 qpair failed and we were unable to recover it. 00:30:50.206 [2024-07-15 15:35:59.658239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.658249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 
00:30:50.207 [2024-07-15 15:35:59.658542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.658552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.658893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.658906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.659238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.659248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.659569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.659579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.659900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.659910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.660286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.660296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.660582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.660591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.660904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.660914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.661239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.661248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.661566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.661583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 
00:30:50.207 [2024-07-15 15:35:59.661919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.661929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.662228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.662238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.662551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.662562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.662750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.662759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.663072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.663082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.663381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.663390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.663699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.663708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.664006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.664017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.664330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.664339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.664655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.664670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 
00:30:50.207 [2024-07-15 15:35:59.665020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.665029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.665407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.665416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.665716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.665726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.666040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.666049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.666355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.666364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.666651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.666660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.666952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.666970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.667287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.667296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.667612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.667622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.667943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.667953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 
00:30:50.207 [2024-07-15 15:35:59.668278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.668287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.668601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.668610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.668904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.668914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.669229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.669239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.669529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.669538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.669902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.669911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.670185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.670195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.670527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.670536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.670737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.670746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.671067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.671079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 
00:30:50.207 [2024-07-15 15:35:59.671432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.671441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.671702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.207 [2024-07-15 15:35:59.671711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.207 qpair failed and we were unable to recover it. 00:30:50.207 [2024-07-15 15:35:59.672020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.672030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.672360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.672370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.672765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.672775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.673143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.673153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.673475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.673485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.673817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.673826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.674160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.674171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.674452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.674461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.674809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.674818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.675146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.675156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.675477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.675486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.675695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.675704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.675936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.675946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.676270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.676279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.676590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.676599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.676973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.676982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.677278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.677288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.677597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.677606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.677897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.677906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.678209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.678218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.678542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.678552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.678870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.678879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.679245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.679256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.679685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.679694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.679927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.679937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.680252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.680261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.680585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.680595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.680971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.680981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.681268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.681277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.681585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.681594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.681923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.682247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.682257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.682559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.682568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.682907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.682917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.683216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.683225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.683561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.683571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.683877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.683896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.684091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.684101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.684507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.684516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.684881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.684896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.685230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.685240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.685562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.685572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.685891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.685900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.686227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.686237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.686571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.686580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.686892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.686905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.687232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.687242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.687585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.687595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.687910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.687920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.688244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.688253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.688557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.688566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.688853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.688862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.689197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.689207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.689446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.689455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.689739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.689748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.689966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.689976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.690066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.690075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.690397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.690407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 
00:30:50.208 [2024-07-15 15:35:59.690718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.690727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.691044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.691054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.691375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.691385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.691647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.691657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.691988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.691999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.208 qpair failed and we were unable to recover it. 00:30:50.208 [2024-07-15 15:35:59.692309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.208 [2024-07-15 15:35:59.692318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.692622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.692632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.692959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.692971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.693180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.693189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.693551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.693560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.693891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.693901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.694213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.694223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.694548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.694557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.694873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.694887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.695209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.695219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.695531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.695541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.695871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.695881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.696202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.696212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.696403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.696414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.696782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.696792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.697095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.697105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.697443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.697454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.697755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.697765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.698002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.698013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.698331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.698341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.698692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.698702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.699046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.699057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.699372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.699382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.699714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.699724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.699923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.699933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.700280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.700289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.700614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.700624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.700947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.700958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.701300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.701310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.701641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.701655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.701968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.701978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.702303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.702313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.702655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.702665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.702853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.702864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.703175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.703186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.703508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.703518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.703850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.703860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.704193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.704203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.704519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.704529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.704846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.704856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.705174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.705184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.705521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.705531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.705920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.705930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.706219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.706229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.706528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.706537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.706855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.706865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.707209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.707220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.707422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.707431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.707639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.707649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.707979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.707989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.708394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.708404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.708716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.708725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.708925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.708935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.709273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.709282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.709352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.709361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.709688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.709698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.710084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.710096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.710274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.710283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.710576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.710585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.710898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.710913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.711285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.711295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.711481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.711490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.711834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.711843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.712148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.712158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.712348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.712357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 
00:30:50.209 [2024-07-15 15:35:59.712671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.712680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.713028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.209 [2024-07-15 15:35:59.713037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.209 qpair failed and we were unable to recover it. 00:30:50.209 [2024-07-15 15:35:59.713353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.210 [2024-07-15 15:35:59.713363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.210 qpair failed and we were unable to recover it. 00:30:50.210 [2024-07-15 15:35:59.713709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.210 [2024-07-15 15:35:59.713719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.210 qpair failed and we were unable to recover it. 00:30:50.210 [2024-07-15 15:35:59.714042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.714051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.714354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.714364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.714565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.714575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.714888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.714902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.715228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.715237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.715617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.715626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 
00:30:50.211 [2024-07-15 15:35:59.715961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.715970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.716292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.716302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.716627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.716636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.716981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.716990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.717331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.717340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.717524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.717533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.717875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.717888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.718068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.718078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.718365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.718375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 00:30:50.211 [2024-07-15 15:35:59.718667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.718683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.211 qpair failed and we were unable to recover it. 
00:30:50.211 [2024-07-15 15:35:59.718954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.211 [2024-07-15 15:35:59.718964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.719158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.719167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.719502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.719512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.719823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.719832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.720158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.720168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.720545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.720554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.720766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.720775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.720930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.720940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.721256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.721265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.721614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.721623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 
00:30:50.212 [2024-07-15 15:35:59.721847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.721857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.722178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.722188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.722521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.722532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.722812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.722829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.723146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.723156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.723475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.723484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.723805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.723814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.724131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.724141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.724347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.724357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.724667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.724676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 
00:30:50.212 [2024-07-15 15:35:59.724984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.724994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.725317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.725326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.725653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.725662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.725987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.725997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.726123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.726132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.726442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.726451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.726752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.726761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.727105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.727115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.727411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.727426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.727762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.727772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 
00:30:50.212 [2024-07-15 15:35:59.728155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.728164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.728325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.728334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.728661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.728670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.729032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.729041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.729386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.729395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.729711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.729720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.730027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.730037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.730339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.730348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.730521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.730530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.730858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.730869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 
00:30:50.212 [2024-07-15 15:35:59.731221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.731231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.731524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.731533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.731845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.731854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.732181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.732192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.732386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.732395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.732746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.732755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.733122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.733132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.733297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.733306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.733693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.733702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.734065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.734074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 
00:30:50.212 [2024-07-15 15:35:59.734275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.734284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.734613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.734622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.734936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.734946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.735003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.735013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.735212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.212 [2024-07-15 15:35:59.735221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.212 qpair failed and we were unable to recover it. 00:30:50.212 [2024-07-15 15:35:59.735514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.735523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.735851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.735860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.736164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.736175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.736485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.736494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.736837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.736847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.737178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.737188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.737510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.737524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.737879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.737891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.738245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.738254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.738418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.738426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.738813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.738822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.739108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.739121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.739444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.739453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.739786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.739795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.739965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.739975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.740400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.740409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.740601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.740610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.740827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.740837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.741066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.741075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.741473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.741483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.741853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.741863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.742182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.742192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.742427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.742437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.742654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.742664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.742986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.742997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.743324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.743334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.743710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.743719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.744041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.744051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.744362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.744371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.744603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.744612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.744930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.744939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.745258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.745267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.745565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.745574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.745906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.745916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.746241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.746250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.746443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.746452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.746772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.746782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.747102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.747112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.747410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.747420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.747762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.747771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.747960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.747970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.748322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.748332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.748636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.748646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.748974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.748984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.749227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.749236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.749555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.749564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.749862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.749871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.750196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.750205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.750590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.750599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.750929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.750939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.751226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.751235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.751532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.751541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.751833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.751842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.752111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.752121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.752518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.752527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 
00:30:50.213 [2024-07-15 15:35:59.752814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.752824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.753140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.753149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.753309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.753319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.213 [2024-07-15 15:35:59.753736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.213 [2024-07-15 15:35:59.753745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.213 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.754066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.754076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.754372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.754382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.754716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.754725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.755057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.755067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.755403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.755413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.755709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.755718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 
00:30:50.214 [2024-07-15 15:35:59.755954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.755964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.756289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.756298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.756575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.756585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.756895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.756904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.757202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.757211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.757522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.757532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.757857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.757866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.758201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.758211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.758546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.758555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.758844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.758854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 
00:30:50.214 [2024-07-15 15:35:59.759181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.759191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.759483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.759492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.759775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.759784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.760142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.760152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.760489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.760500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.760865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.760875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.761195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.761205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.761509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.761519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.761830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.761839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.762223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.762232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 
00:30:50.214 [2024-07-15 15:35:59.762545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.762554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.762757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.762767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.763080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.763090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.763424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.763434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.763727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.763737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.764062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.764071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.764340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.764350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.764679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.764688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.764925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.764934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.765228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.765237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 
00:30:50.214 [2024-07-15 15:35:59.765540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.765549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.765852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.765861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.766185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.766196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.766396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.766405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.766711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.766720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.767023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.767033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.767324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.767333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.214 [2024-07-15 15:35:59.767649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.214 [2024-07-15 15:35:59.767658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.214 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.768036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.768045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.768230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.768239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.768589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.768599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.768784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.768796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.769015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.769026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.769357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.769366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.769659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.769669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.769988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.769998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.770224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.770233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.770550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.770558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.770872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.770881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.771210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.771220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.771557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.771567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.771907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.771918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.772249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.772258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.772608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.772617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.772779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.772789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.773098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.773108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.773414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.773424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.773653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.773662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.774013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.774023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.774340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.774349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.774562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.774571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.774863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.774872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.775173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.775184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.775473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.775482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.775763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.775772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.776101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.776111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.776453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.776463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.776800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.776809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.777182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.777192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.777477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.777486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.777853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.777862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.778183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.778193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.778584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.778593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.778930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.778940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.779271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.779280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.779629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.779639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.779953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.779962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.780130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.780140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.780461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.780471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.780775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.780785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.781109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.781119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.781438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.781447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.781784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.781794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.782123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.782132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.782447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.782465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.782775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.782784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.783083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.783094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.783400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.783410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.783722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.783732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.784037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.784047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.215 [2024-07-15 15:35:59.784233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.784244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.784604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.784614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.784856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.784865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.785188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.785198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.785491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.785501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.785842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.785851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.786143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.786154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.786479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.786489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.786835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.786845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 00:30:50.215 [2024-07-15 15:35:59.787171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.215 [2024-07-15 15:35:59.787182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.215 qpair failed and we were unable to recover it. 
00:30:50.216 [2024-07-15 15:35:59.787494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.787504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.787817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.787827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.788198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.788208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.788510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.788521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.788856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.788866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.789081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.789091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.789434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.789443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.789787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.789797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.790132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.790143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.790505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.790519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 
00:30:50.216 [2024-07-15 15:35:59.790734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.790744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.791076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.791086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.791399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.791408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.791679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.791688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.792416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.792438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.792734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.792745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.793154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.793165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.793482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.793491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.793774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.793785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.794106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.794116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 
00:30:50.216 [2024-07-15 15:35:59.794432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.794443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.794757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.794767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.216 [2024-07-15 15:35:59.795011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.216 [2024-07-15 15:35:59.795022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.216 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.795352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.795363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.795652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.795662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.795977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.795987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.796366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.796376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.796710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.796720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.797041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.797051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.797368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.797378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 
00:30:50.512 [2024-07-15 15:35:59.797494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.797504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.797840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.797850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.798292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.798307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.798597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.798608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.798953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.798964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.799308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.799318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.799640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.799653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.799972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.799982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.800302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.800311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.800602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.800611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 
00:30:50.512 [2024-07-15 15:35:59.800782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.800793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.801126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.801137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.801420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.801429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.801721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.801731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.802050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.802060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.802378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.802388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.802722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.802731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.803125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.803136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.803450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.803461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.803804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.803813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 
00:30:50.512 [2024-07-15 15:35:59.804199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.804209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.804523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.804535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.804859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.804869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.805207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.805216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.805553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.805563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.805865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.805875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.806161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.806170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.806481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.806490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.806793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.806802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.807122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.807132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 
00:30:50.512 [2024-07-15 15:35:59.807350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.807359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.512 [2024-07-15 15:35:59.807581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.512 [2024-07-15 15:35:59.807590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.512 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.807899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.807908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.808198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.808209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.808407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.808416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.808636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.808651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.808969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.808979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.809342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.809351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.809692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.809701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.810023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.810033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 
00:30:50.513 [2024-07-15 15:35:59.810418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.810427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.810747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.810756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.811096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.811106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.811381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.811390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.811686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.811695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.812080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.812089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.812469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.812479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.812812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.812822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.813147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.813156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.813543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.813553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 
00:30:50.513 [2024-07-15 15:35:59.813872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.813882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.814189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.814199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.814493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.814502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.814796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.814805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.815137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.815148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.815463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.815473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.815814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.815824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.816020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.816031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.816322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.816331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.816662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.816672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 
00:30:50.513 [2024-07-15 15:35:59.817021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.817031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.817440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.817450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.817661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.817670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.817996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.818005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.818305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.818314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.818515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.818525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.818739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.818748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.819063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.819074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.819446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.819455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.819754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.819765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 
00:30:50.513 [2024-07-15 15:35:59.820084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.820094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.820395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.513 [2024-07-15 15:35:59.820404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.513 qpair failed and we were unable to recover it. 00:30:50.513 [2024-07-15 15:35:59.820772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.820781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.821091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.821100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.821382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.821391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.821709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.821718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.822111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.822120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.822418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.822428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.822737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.822746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.823079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.823090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 
00:30:50.514 [2024-07-15 15:35:59.823323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.823333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.823538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.823547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.823871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.823880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.824112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.824121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.824439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.824448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.824790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.824800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.825128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.825138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.825432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.825441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.825770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.825780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.826007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.826017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 
00:30:50.514 [2024-07-15 15:35:59.826280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.826289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.826656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.826665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.826919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.826929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.827224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.827233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.827495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.827504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.827790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.827799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.828140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.828150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.828504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.828513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.828806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.828816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.829143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.829153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 
00:30:50.514 [2024-07-15 15:35:59.829459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.829469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.829811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.829822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.830193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.830203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.830400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.830409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.830801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.830811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.831122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.831133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.831449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.831458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.831781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.831790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.831987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.831997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.832339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.832348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 
00:30:50.514 [2024-07-15 15:35:59.832666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.832675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.833049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.833059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.833477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.833486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.514 qpair failed and we were unable to recover it. 00:30:50.514 [2024-07-15 15:35:59.833827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.514 [2024-07-15 15:35:59.833836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.834176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.834187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.834516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.834525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.834850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.834859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.835071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.835082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.835352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.835361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.835692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.835703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 
00:30:50.515 [2024-07-15 15:35:59.836024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.836034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.836335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.836345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.837126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.837147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.837449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.837460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.837796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.837805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.838171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.838182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.838526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.838535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.839401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.839423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.839749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.839762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.839981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.839992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 
00:30:50.515 [2024-07-15 15:35:59.840279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.840289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.840604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.840614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.840840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.841172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.841182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.841490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.841499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.841810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.841820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.842142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.842152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.842465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.842474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.842795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.842811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.843031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.843043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 
00:30:50.515 [2024-07-15 15:35:59.843371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.843380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.843725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.843735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.844048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.844059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.844366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.844376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.844711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.844722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.845037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.845047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.845350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.845359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.845678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.845688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.846064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.846074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.846422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.846431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 
00:30:50.515 [2024-07-15 15:35:59.846732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.846742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.846803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.846814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.847115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.847125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.847462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.847472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.515 qpair failed and we were unable to recover it. 00:30:50.515 [2024-07-15 15:35:59.847652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.515 [2024-07-15 15:35:59.847663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.848016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.848027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.848347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.848357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.848652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.848661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.848957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.848966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.849176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.849185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 
00:30:50.516 [2024-07-15 15:35:59.849506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.849515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.849744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.849754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.849951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.849960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.850161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.850171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.850548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.850558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.850899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.850916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.851246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.851256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.851547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.851564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.851928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.851938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.852214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.852224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 
00:30:50.516 [2024-07-15 15:35:59.852573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.852582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.852797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.852806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.853137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.853147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.853483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.853492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.853830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.853840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.854059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.854068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.854396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.854406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.854695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.854705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.855081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.855091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.855288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.855298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 
00:30:50.516 [2024-07-15 15:35:59.855588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.855598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.855940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.855949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.856324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.856334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.856664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.856673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.857004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.857014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.857304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.857314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.857603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.857613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.857804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.516 [2024-07-15 15:35:59.857813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.516 qpair failed and we were unable to recover it. 00:30:50.516 [2024-07-15 15:35:59.858146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.858157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.858493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.858503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 
00:30:50.517 [2024-07-15 15:35:59.858850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.858859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.859051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.859062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.859414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.859424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.859778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.859787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.860112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.860122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.860487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.860496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.860783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.860800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.861112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.861122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.861458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.861468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.861781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.861790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 
00:30:50.517 [2024-07-15 15:35:59.862116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.862126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.862461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.862471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.862805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.862816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.863175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.863186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.863480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.863490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.863651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.863660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.863860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.863869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.864218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.864228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.864458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.864467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.864762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.864772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 
00:30:50.517 [2024-07-15 15:35:59.864997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.865007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.865343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.865352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.865667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.865676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.865969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.865979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.866174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.866184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.866508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.866517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.866803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.866813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.867117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.867128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.867428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.867437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.867756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.867767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 
00:30:50.517 [2024-07-15 15:35:59.868059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.868069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.868428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.868438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.868780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.868789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.869108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.869121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.869427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.869436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.869771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.869780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.870094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.870104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.870345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.870354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.517 [2024-07-15 15:35:59.870532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.517 [2024-07-15 15:35:59.870542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.517 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.870769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.870778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 
00:30:50.518 [2024-07-15 15:35:59.871084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.871094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.871398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.871407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.871720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.871730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.872112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.872122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.872397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.872407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.872744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.872754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.872976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.872986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.873335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.873344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.873675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.873685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.873998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.874008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 
00:30:50.518 [2024-07-15 15:35:59.874373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.874382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.874734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.874744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.875062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.875072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.875394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.875405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.875597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.875606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.875946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.875955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.876153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.876163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.876496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.876505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.876795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.876804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.877024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.877034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 
00:30:50.518 [2024-07-15 15:35:59.877304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.877316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.877636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.877646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.877962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.877972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.878156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.878165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.878407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.878417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.878780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.878789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.879109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.879120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.879461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.879471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.879814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.879823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.880108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.880117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 
00:30:50.518 [2024-07-15 15:35:59.880310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.880319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.880638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.880647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.880948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.880957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.881301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.881310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.881615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.881625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.881926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.881936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.882243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.882252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.882472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.882481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.882690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.882699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 00:30:50.518 [2024-07-15 15:35:59.882983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.518 [2024-07-15 15:35:59.882993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.518 qpair failed and we were unable to recover it. 
00:30:50.518 [2024-07-15 15:35:59.883332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.883341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.883654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.883663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.883875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.883888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.884193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.884202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.884485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.884494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.884779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.884788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.885125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.885135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.885435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.885444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.885763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.885772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.886095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.886112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 
00:30:50.519 [2024-07-15 15:35:59.886425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.886435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.886781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.886791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.887022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.887032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.887372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.887381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.887586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.887595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.887939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.887949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.888155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.888164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.888368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.888377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.888682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.888691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.889007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.889017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 
00:30:50.519 [2024-07-15 15:35:59.889196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.889207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.889481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.889490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.889703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.889712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.890060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.890069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.890381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.890390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.890583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.890593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.890832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.890842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.891040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.891051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.891395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.891405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.891620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.891630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 
00:30:50.519 [2024-07-15 15:35:59.891937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.891946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.892244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.892253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.892558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.892567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.892891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.892900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.893190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.893200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.893502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.893512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.893853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.893862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.894021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.894032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.894341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.894350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.894692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.894701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 
00:30:50.519 [2024-07-15 15:35:59.895039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.519 [2024-07-15 15:35:59.895050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.519 qpair failed and we were unable to recover it. 00:30:50.519 [2024-07-15 15:35:59.895370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.895380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.895594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.895603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.895804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.895813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.895965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.895975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.896208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.896217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.896526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.896535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.896830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.896839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.897171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.897183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.897506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.897516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 
00:30:50.520 [2024-07-15 15:35:59.897839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.897848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.898228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.898237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.898536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.898545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.898736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.898745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.899013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.899023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.899321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.899330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.899648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.899657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.900070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.900079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.900369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.900387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.900597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.900607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 
00:30:50.520 [2024-07-15 15:35:59.900920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.900929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.901248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.901258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.901573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.901582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.901834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.901843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.902197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.902207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.902546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.902555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.902836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.902845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.903139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.903149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.903458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.903467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.903661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.903671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 
00:30:50.520 [2024-07-15 15:35:59.903869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.903878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.904189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.904200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.904486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.904496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.904804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.904813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.905117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.905127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.905440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.905452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.905832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.520 [2024-07-15 15:35:59.905841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.520 qpair failed and we were unable to recover it. 00:30:50.520 [2024-07-15 15:35:59.906163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.906173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.906489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.906498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.906796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.906806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 
00:30:50.521 [2024-07-15 15:35:59.907126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.907136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.907326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.907335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.907608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.907618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.907924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.907933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.908319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.908329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.908643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.908652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.908951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.908960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.909251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.909260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.909647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.909657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.909972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.909982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 
00:30:50.521 [2024-07-15 15:35:59.910294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.910303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.910637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.910647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.911016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.911027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.911337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.911347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.911639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.911649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.911869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.911878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.912277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.912287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.912578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.912588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.912895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.912905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 00:30:50.521 [2024-07-15 15:35:59.913296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.521 [2024-07-15 15:35:59.913305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.521 qpair failed and we were unable to recover it. 
00:30:50.521 [2024-07-15 15:35:59.913473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.521 [2024-07-15 15:35:59.913483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:50.521 qpair failed and we were unable to recover it.
[... the same three-line error triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 15:35:59.913 and 15:35:59.976 ...]
00:30:50.526 [2024-07-15 15:35:59.976516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.526 [2024-07-15 15:35:59.976525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:50.526 qpair failed and we were unable to recover it.
00:30:50.526 [2024-07-15 15:35:59.976823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.976833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.977125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.977134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.977487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.977496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.977813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.977822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.978132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.978142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.978459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.978469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.978637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.978648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.526 [2024-07-15 15:35:59.979016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.526 [2024-07-15 15:35:59.979027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.526 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.979361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.979371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.979698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.979708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 
00:30:50.527 [2024-07-15 15:35:59.980055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.980065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.980364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.980374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.980673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.980682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.981010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.981019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.981315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.981324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.981510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.981520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.981911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.981920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.982230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.982239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.982548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.982558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.982779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.982789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 
00:30:50.527 [2024-07-15 15:35:59.983085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.983096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.983306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.983316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.983567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.983576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.983811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.983821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.984134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.984143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.984464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.984473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.984838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.984847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.985150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.985160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.985469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.985478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.985821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.985830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 
00:30:50.527 [2024-07-15 15:35:59.986024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.986034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.986397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.986406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.986718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.986728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.987065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.987076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.987411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.987420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.987742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.987751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.988067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.988077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.988418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.988427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.988729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.988739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.989047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.989056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 
00:30:50.527 [2024-07-15 15:35:59.989363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.989372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.989547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.989556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.989919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.989929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.990288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.990297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.990548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.990557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.990880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.990897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.991198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.991208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.991419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.991430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.527 [2024-07-15 15:35:59.991746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.527 [2024-07-15 15:35:59.991755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.527 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.992059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.992068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 
00:30:50.528 [2024-07-15 15:35:59.992392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.992403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.992651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.992661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.992985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.992995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.993207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.993217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.993412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.993422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.993655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.993664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.993980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.993990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.994292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.994302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.994626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.994635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.994934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.994944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 
00:30:50.528 [2024-07-15 15:35:59.995306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.995316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.995632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.995642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.995877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.995891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.996219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.996228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.996521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.996530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.996820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.996829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.997142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.997152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.997456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.997465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.997764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.997774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.998109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.998119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 
00:30:50.528 [2024-07-15 15:35:59.998502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.998511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.998807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.998817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.999128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.999138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.999530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.999540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:35:59.999859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:35:59.999869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.000194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.000204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.000409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.000419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.000796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.000807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.001532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.001546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.001812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.001823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 
00:30:50.528 [2024-07-15 15:36:00.002233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.002243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.002573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.002583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.002767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.002777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.003091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.003102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.003420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.003430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.003727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.003736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.003937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.003946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.004310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.004320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.004683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.004693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 00:30:50.528 [2024-07-15 15:36:00.005004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.528 [2024-07-15 15:36:00.005014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.528 qpair failed and we were unable to recover it. 
00:30:50.529 [2024-07-15 15:36:00.005342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.005352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.005585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.005595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.005906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.005917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.006169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.006179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.006511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.006521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.006742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.006752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.006878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.006896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.007220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.007231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.007579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.007588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.007899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.007909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 
00:30:50.529 [2024-07-15 15:36:00.008224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.008234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.008533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.008542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.008855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.008865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.009188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.009198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.009494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.009507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.009725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.009735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.010065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.010076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.010385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.010396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.010691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.010701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.011047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.011058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 
00:30:50.529 [2024-07-15 15:36:00.011317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.011326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.011628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.011638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.011824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.011833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.012057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.012067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.012354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.012363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.012552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.012561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.012856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.012866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.013220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.013230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.013532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.013543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.013822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.013832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 
00:30:50.529 [2024-07-15 15:36:00.013976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.013986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.014308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.014318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.014652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.014663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.014869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.014879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.529 [2024-07-15 15:36:00.015216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.529 [2024-07-15 15:36:00.015227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.529 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.015521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.015531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.015867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.015877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.016204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.016214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.016429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.016439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.016756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.016765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 
00:30:50.530 [2024-07-15 15:36:00.016982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.016993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.017271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.017280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.017622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.017631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.017816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.017827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.018143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.018153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.018474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.018483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.018783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.018793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.019110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.019121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.019311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.019322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.019673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.019683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 
00:30:50.530 [2024-07-15 15:36:00.019929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.019938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.020273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.020282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.020469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.020815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.020824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.021139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.021149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.021432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.021441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.021802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.021812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.022129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.022139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.022373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.022383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.022592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.022601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 
00:30:50.530 [2024-07-15 15:36:00.022793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.022804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.023171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.023182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.023358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.023369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.023623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.023632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.023982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.023992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.024421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.024431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 00:30:50.530 [2024-07-15 15:36:00.024541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.530 [2024-07-15 15:36:00.024551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:50.530 qpair failed and we were unable to recover it. 
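(The errno 111 repeated above is ECONNREFUSED: the host can reach 10.0.0.2 but nothing is accepting on port 4420, the IANA-assigned NVMe/TCP port, which appears to be the failure condition this test run is exercising. The following is an illustrative sketch only, not SPDK code, assuming a peer that actively refuses the connection; it shows how the same errno surfaces from a plain connect(), the call that posix_sock_create() is reporting on.)

/* Illustrative only: reproduce the "connect() failed, errno = 111" condition
 * seen in the log by connecting to an address/port with no listener.
 * Address and port mirror the log (10.0.0.2:4420); this is not SPDK code. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* When the peer refuses the connection this prints errno = 111
         * (ECONNREFUSED), matching the posix_sock_create() messages above. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}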
00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Write completed with error (sct=0, sc=8) 00:30:50.530 starting I/O failed 00:30:50.530 Read completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 Read completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 Read completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 Read completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 Write completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 Write completed with error (sct=0, sc=8) 00:30:50.531 starting I/O failed 00:30:50.531 [2024-07-15 15:36:00.024770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:50.531 [2024-07-15 15:36:00.024907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.024919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 
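(Each "completed with error (sct=0, sc=8)" entry above is an NVMe completion with status code type 0, Generic Command Status, and status code 08h, which the NVMe base specification defines as Command Aborted due to SQ Deletion. That is consistent with the CQ transport error -6, i.e. -ENXIO "No such device or address", reported by spdk_nvme_qpair_process_completions on qpair id 2: the queue pair is torn down, its pending reads and writes are failed with the abort status, and the subsequent connection attempts in the log carry a new tqpair pointer, 0x7f2720000b90. The decoder below is an illustrative sketch only; the helper names are local to the example and are not SPDK APIs.)

/* Illustrative only: decode the (sct, sc) pairs printed in the log.
 * Status names follow the NVMe base spec's Generic Command Status table;
 * the functions here are local helpers, not SPDK APIs. */
#include <stdio.h>
#include <stdint.h>

static const char *generic_sc_name(uint8_t sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion"; /* sc=8 in the log */
    default:   return "Other generic status";
    }
}

int main(void)
{
    uint8_t sct = 0;    /* status code type 0 = Generic Command Status */
    uint8_t sc  = 0x08; /* status code taken from the log lines above   */

    if (sct == 0) {
        printf("sct=%u (Generic Command Status), sc=0x%02x: %s\n",
               sct, sc, generic_sc_name(sc));
    }
    return 0;
}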
00:30:50.531 [2024-07-15 15:36:00.025193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.025221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.025699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.025709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.026256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.026284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.026542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.026551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.026749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.026757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.027211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.027243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.027506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.027516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.027897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.027908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.028228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.028235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.028559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.028566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 
00:30:50.531 [2024-07-15 15:36:00.028917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.028924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.029311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.029318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.029445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.029452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.029633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.029641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.029922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.029930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.030156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.030164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.030371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.030379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.030709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.030716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.030772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.030779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.030902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.030910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 
00:30:50.531 [2024-07-15 15:36:00.031190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.031197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.031504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.031512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.031825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.031832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.032139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.032148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.032502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.032510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.032898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.032907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.033146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.033153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.033384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.033392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.033743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.033750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.033932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.033939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 
00:30:50.531 [2024-07-15 15:36:00.034336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.034342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.034672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.034679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.034995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.035002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.035318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.035324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.035502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.035509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.035942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.035949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.036275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.531 [2024-07-15 15:36:00.036281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.531 qpair failed and we were unable to recover it. 00:30:50.531 [2024-07-15 15:36:00.036475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.036483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.036904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.036941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.037274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.037286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 
00:30:50.532 [2024-07-15 15:36:00.037605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.037615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.037933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.037944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.038198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.038210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.038485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.038496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.038810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.038821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.039148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.039158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.039392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.039402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.039706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.039720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.039802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.039811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.040157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.040167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 
00:30:50.532 [2024-07-15 15:36:00.040396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.040406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.040616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.040627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.040702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.040712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.041045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.041055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.041386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.041396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.041493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.041503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.041781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.041791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.041961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.041971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.042408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.042417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.042638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.042647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 
00:30:50.532 [2024-07-15 15:36:00.042968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.042977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.043157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.043167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.043489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.043499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.043684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.043695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.044017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.044027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.044318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.044327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.044624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.044634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.044970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.044980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.045279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.045288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.045582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.045592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 
00:30:50.532 [2024-07-15 15:36:00.045924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.045935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.046296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.046306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.046604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.046615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.046836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.046846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.047161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.047171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.047354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.047365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.047564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.047573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.532 qpair failed and we were unable to recover it. 00:30:50.532 [2024-07-15 15:36:00.047894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.532 [2024-07-15 15:36:00.047904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.048346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.048355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.048679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.048689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 
00:30:50.533 [2024-07-15 15:36:00.049063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.049073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.049431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.049440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.049756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.049765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.050000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.050010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.050345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.050354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.050661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.050670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.050984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.050994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.051323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.051335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.051654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.051663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.051918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.051930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 
00:30:50.533 [2024-07-15 15:36:00.052239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.052248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.052474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.052483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.052691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.052701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.052903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.052912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.053285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.053295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.053648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.053658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.053957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.053967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.054172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.054182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.054486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.054496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.054775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.054785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 
00:30:50.533 [2024-07-15 15:36:00.054982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.054992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.055323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.055332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.055573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.055583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.055899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.055909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.055960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.055970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.056317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.056327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.056601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.056610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.056911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.056921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.057277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.057286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.057586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.057596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 
00:30:50.533 [2024-07-15 15:36:00.057982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.057992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.058210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.533 [2024-07-15 15:36:00.058220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.533 qpair failed and we were unable to recover it. 00:30:50.533 [2024-07-15 15:36:00.058549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.058558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.058875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.058888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.059197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.059207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.059504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.059514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.059832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.059842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.060028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.060038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.060121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.060130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.060301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.060311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 
00:30:50.534 [2024-07-15 15:36:00.060605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.060614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.060962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.060971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.061332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.061341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.061537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.061546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.061848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.061857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.062116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.062126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.062296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.062306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.062685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.062698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.063033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.063043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.063261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.063270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 
00:30:50.534 [2024-07-15 15:36:00.063654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.063663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.063975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.063985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.064287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.064297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.064497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.064507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.064855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.064865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.065197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.065208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.065505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.065515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.065708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.065718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.066038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.066049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.066136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.066146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 
00:30:50.534 [2024-07-15 15:36:00.066301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.066312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.066676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.066686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.066967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.066977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.067187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.067197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.067403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.067412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.067719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.067728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.068031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.068041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.068227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.068237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.068559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.068568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.068869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.068879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 
00:30:50.534 [2024-07-15 15:36:00.069253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.069263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.069583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.069592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.534 [2024-07-15 15:36:00.069825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.534 [2024-07-15 15:36:00.069834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.534 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.070148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.070159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.070443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.070453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.070780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.070790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.070984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.070995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.071336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.071346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.071659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.071669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.071986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.071996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 
00:30:50.535 [2024-07-15 15:36:00.072326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.072336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.072629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.072639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.073004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.073014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.073178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.073187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.073591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.073600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.073900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.073910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.074163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.074172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.074509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.074521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.074721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.074732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.075099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.075109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 
00:30:50.535 [2024-07-15 15:36:00.075439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.075448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.075635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.075645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.075961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.075970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.076219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.076229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.076560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.076570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.076865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.076875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.077132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.077142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.077484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.077494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.077782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.077791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.078166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.078176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 
00:30:50.535 [2024-07-15 15:36:00.078502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.078512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.078715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.078726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.079044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.079054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.079353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.079362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.079536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.079546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.079902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.079912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.080214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.080223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.080528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.080537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.080894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.080905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.081207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.081216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 
00:30:50.535 [2024-07-15 15:36:00.081566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.081575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.081779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.081789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.082109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.082118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.535 qpair failed and we were unable to recover it. 00:30:50.535 [2024-07-15 15:36:00.082320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.535 [2024-07-15 15:36:00.082330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.082562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.082572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.082778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.082787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.083121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.083130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.083467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.083476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.083799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.083808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.083868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.083877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 
00:30:50.536 [2024-07-15 15:36:00.084203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.084213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.084388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.084398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.084691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.084700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.085008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.085019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.085350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.085360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.085668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.085677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.085850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.085859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.086250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.086262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.086573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.086583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.086926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.086936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 
00:30:50.536 [2024-07-15 15:36:00.087143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.087152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.087433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.087443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.087740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.087750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.088145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.088155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.088377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.088387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.088698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.088708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.089027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.089037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.089351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.089360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.089688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.089697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.089888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.089899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 
00:30:50.536 [2024-07-15 15:36:00.090094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.090104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.090444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.090453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.090748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.090758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.091064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.091074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.091376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.091386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.091692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.091702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.091788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.091797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.092096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.092106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.092511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.092520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.092843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.092852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 
00:30:50.536 [2024-07-15 15:36:00.093025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.093035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.093387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.093396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.093504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.093513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.093886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.536 [2024-07-15 15:36:00.093896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.536 qpair failed and we were unable to recover it. 00:30:50.536 [2024-07-15 15:36:00.094280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.094290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.094626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.094636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.094863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.094873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.095232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.095243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.095464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.095474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.095586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.095594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 
00:30:50.537 [2024-07-15 15:36:00.095840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.095850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.096039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.096050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.096103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.096114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.096411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.096420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.096655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.096665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.096914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.096924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 
00:30:50.537 [2024-07-15 15:36:00.097576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.097972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.097982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.098117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.098128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.098429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.098439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.098816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.098825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.099185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.099195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.099538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.099547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.099755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.099765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.100076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.100086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 
00:30:50.537 [2024-07-15 15:36:00.100402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.100412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.100752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.100762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.101104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.101114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.101426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.101435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.101624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.101641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.102003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.102013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.102300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.102309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.102507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.102518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.102841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.102850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 00:30:50.537 [2024-07-15 15:36:00.103073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.537 [2024-07-15 15:36:00.103083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.537 qpair failed and we were unable to recover it. 
00:30:50.537 [2024-07-15 15:36:00.103366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.103376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.103668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.103678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.103875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.103889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.104197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.104206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.104410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.104419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.104799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.104808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.105133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.105143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.105463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.105472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.105667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.105677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.105995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.106005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 
00:30:50.538 [2024-07-15 15:36:00.106206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.106215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.106603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.106613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.106901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.106911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.107213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.107529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.107539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.107860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.107869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.108249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.108259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.108483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.108495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.108718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.108728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.109080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.109090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 
00:30:50.538 [2024-07-15 15:36:00.109390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.109400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.109584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.109595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.109935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.109945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.110254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.110263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.110468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.110478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.110803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.110813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.110952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.110962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.111301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.111310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.111511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.111521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 00:30:50.538 [2024-07-15 15:36:00.111749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.538 [2024-07-15 15:36:00.111758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.538 qpair failed and we were unable to recover it. 
00:30:50.538 [2024-07-15 15:36:00.112043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.538 [2024-07-15 15:36:00.112053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:50.538 qpair failed and we were unable to recover it.
00:30:50.538 [2024-07-15 15:36:00.112388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.538 [2024-07-15 15:36:00.112397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:50.538 qpair failed and we were unable to recover it.
00:30:50.538 [2024-07-15 15:36:00.112593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.538 [2024-07-15 15:36:00.112610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:50.538 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 15:36:00.112 through 15:36:00.174, console timestamps 00:30:50.538 to 00:30:50.815 ...]
00:30:50.815 [2024-07-15 15:36:00.174341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.815 [2024-07-15 15:36:00.174351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:50.815 qpair failed and we were unable to recover it.
00:30:50.815 [2024-07-15 15:36:00.174660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.815 [2024-07-15 15:36:00.174669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.174990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.175000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.175183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.175192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.175496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.175506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.175801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.175812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.176132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.176141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.176427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.176436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.176736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.176745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.177148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.177158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.177494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.177504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 
00:30:50.816 [2024-07-15 15:36:00.177843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.177853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.178159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.178168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.178358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.178369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.178694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.178705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.178889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.178900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.179236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.179246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.179569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.179578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.179898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.180195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.180204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.180504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.180513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 
00:30:50.816 [2024-07-15 15:36:00.180808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.180818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.181137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.181147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.181452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.181461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.181803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.181813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.182002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.182012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.182335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.182345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.182647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.182657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.182993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.183004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.183314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.183323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.183715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.183724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 
00:30:50.816 [2024-07-15 15:36:00.184025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.184035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.184353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.184362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.184687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.184697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.185027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.185038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.185374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.185384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.185682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.185692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.186038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.186048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.186384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.186393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.186753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.186763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.187102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.187111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 
00:30:50.816 [2024-07-15 15:36:00.187407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.187416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.187729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.187738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.187978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.187988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.188329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.188339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.188653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.188665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.188947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.188956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.189279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.189289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.189581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.189590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.189914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.189924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 00:30:50.816 [2024-07-15 15:36:00.190223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.190232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.816 qpair failed and we were unable to recover it. 
00:30:50.816 [2024-07-15 15:36:00.190557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.816 [2024-07-15 15:36:00.190568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.190794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.190804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.191010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.191020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.191354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.191363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.191740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.191749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.192131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.192141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.192448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.192457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.192754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.192763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.192926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.192936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.193278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.193288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 
00:30:50.817 [2024-07-15 15:36:00.193568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.193577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.193891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.193901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.194194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.194203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.194534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.194543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.194888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.194898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.195178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.195188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.195364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.195374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.195684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.195693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.195990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.196000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.196335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.196344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 
00:30:50.817 [2024-07-15 15:36:00.196640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.196650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.196943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.196953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.197266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.197276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.197588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.197597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.197773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.197784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.198084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.198094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.198394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.198403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.198698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.198708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.199053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.199063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.199357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.199367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 
00:30:50.817 [2024-07-15 15:36:00.199591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.199600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.199954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.199964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.200314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.200324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.200660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.200670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.201010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.201021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.201316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.201325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.201659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.201669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.202041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.202051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.202337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.202347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.202644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.202654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 
00:30:50.817 [2024-07-15 15:36:00.203001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.203011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.203295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.203305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.203579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.203589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.203900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.203910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.204258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.204267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.204605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.204615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.204835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.204845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.205158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.205167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.205485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.205494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.205827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.205837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 
00:30:50.817 [2024-07-15 15:36:00.206052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.817 [2024-07-15 15:36:00.206062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.817 qpair failed and we were unable to recover it. 00:30:50.817 [2024-07-15 15:36:00.206274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.206284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.206673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.206682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.206974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.206984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.207305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.207314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.207627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.207638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.207948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.207958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.208149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.208159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.208460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.208470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.208755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.208764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 
00:30:50.818 [2024-07-15 15:36:00.209088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.209097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.209269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.209280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.209606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.209615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.209927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.209937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.210225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.210234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.210430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.210439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.210747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.210757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.211059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.211068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.211387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.211396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.211730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.211740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 
00:30:50.818 [2024-07-15 15:36:00.212013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.212024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.212344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.212353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.212553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.212563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.212860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.212869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.213227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.213239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.213442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.213451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.213725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.213736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.214115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.214126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.214299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.214309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.214657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.214667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 
00:30:50.818 [2024-07-15 15:36:00.215007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.215016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.215179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.215189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.215390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.215399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.215680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.215690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.215878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.215893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.216251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.216260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.216582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.216591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.216862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.216871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.217164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.217174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.217386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.217396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 
00:30:50.818 [2024-07-15 15:36:00.217717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.217726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.217911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.217923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.818 qpair failed and we were unable to recover it. 00:30:50.818 [2024-07-15 15:36:00.218131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.818 [2024-07-15 15:36:00.218140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.218461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.218470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.218782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.218791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.219062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.219071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.219387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.219396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.219688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.219697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.220001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.220011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.220295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.220304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 
00:30:50.819 [2024-07-15 15:36:00.220476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.220485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.220800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.220810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.221143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.221153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.221360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.221369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.221646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.221656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.221980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.221990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.222316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.222325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.222495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.222504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.222876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.222888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.223271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.223280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 
00:30:50.819 [2024-07-15 15:36:00.223594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.223604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.223804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.223814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.224038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.224047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.224372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.224382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.224714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.224726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.224927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.224938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.225262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.225272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.225560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.225570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.225901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.225911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.226241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.226250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 
00:30:50.819 [2024-07-15 15:36:00.226453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.226463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.226740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.226749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.819 qpair failed and we were unable to recover it. 00:30:50.819 [2024-07-15 15:36:00.227044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.819 [2024-07-15 15:36:00.227054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.227390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.227400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.227607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.227617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.227906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.227916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.228245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.228256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.228538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.228547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.228861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.228870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.229248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.229258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 
00:30:50.820 [2024-07-15 15:36:00.229606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.229615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.229903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.229913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.230262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.230273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.230607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.230617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.230948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.230959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.231251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.231260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.231535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.231545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.231888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.231897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.232235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.232245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.232563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.232572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 
00:30:50.820 [2024-07-15 15:36:00.232914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.232923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.233270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.233279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.233567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.233576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.233896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.233907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.234095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.234104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.234399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.234408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.234704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.234714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.235015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.235025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.235312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.235322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.235634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.235643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 
00:30:50.820 [2024-07-15 15:36:00.235959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.235969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.236249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.236259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.236564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.236574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.236782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.236793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.237007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.237020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.237216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.237226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.237573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.237582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.237939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.237950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.238010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.238020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.238188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.238199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 
00:30:50.820 [2024-07-15 15:36:00.238525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.238534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.238624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.238634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.238824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.238834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.239210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.239220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.239513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.239522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.239843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.239852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.240145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.240154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.240460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.240470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.240812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.240821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.241112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.241123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 
00:30:50.820 [2024-07-15 15:36:00.241361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.241371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.241709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.241719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.820 qpair failed and we were unable to recover it. 00:30:50.820 [2024-07-15 15:36:00.241944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.820 [2024-07-15 15:36:00.241954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.242179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.242188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.242567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.242576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.242913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.242924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.243291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.243300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.243620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.243629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.243939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.243949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.244290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.244299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 
00:30:50.821 [2024-07-15 15:36:00.244547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.244557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.244879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.244894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.245202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.245211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.245503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.245512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.245835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.245844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.246133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.246143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.246440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.246450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.246800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.246810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.247020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.247029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.247353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.247363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 
00:30:50.821 [2024-07-15 15:36:00.247719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.247729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.247983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.247994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.248330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.248339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.248644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.248654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.248946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.248958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.249345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.249354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.249674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.249683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.249983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.249993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.250300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.250309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.250642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.250652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 
00:30:50.821 [2024-07-15 15:36:00.250965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.250976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.251207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.251217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.251507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.251516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.251688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.251698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.252072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.252082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.252401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.252410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.252658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.252667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.252977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.252987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.253301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.253311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.253625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.253635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 
00:30:50.821 [2024-07-15 15:36:00.253954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.253964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.254309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.254319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.254648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.254658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.255001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.255011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.255193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.255203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.255524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.255534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.255854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.255863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.256199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.256208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.256507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.256516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.256797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.256807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 
00:30:50.821 [2024-07-15 15:36:00.257008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.257018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.257252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.257262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.257466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.821 [2024-07-15 15:36:00.257475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.821 qpair failed and we were unable to recover it. 00:30:50.821 [2024-07-15 15:36:00.257798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.257807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.258121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.258132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.258500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.258510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.258854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.258864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.259206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.259216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.259546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.259556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.259745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.259755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 
00:30:50.822 [2024-07-15 15:36:00.259954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.259964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.260198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.260207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.260551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.260561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.260874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.260887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.261169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.261180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.261478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.261487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.261771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.261780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.261964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.261975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.262340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.262349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.262660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.262669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 
00:30:50.822 [2024-07-15 15:36:00.262899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.262909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.263204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.263214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.263549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.263558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.263751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.263760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.264096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.264500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.264509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.264850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.264859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.265240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.265251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.265410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.265421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.265809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.265819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 
00:30:50.822 [2024-07-15 15:36:00.266144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.266154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.266327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.266337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.266665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.266675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.266880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.266894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.267087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.267097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.267382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.267391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.267696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.267705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.267866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.267876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.268164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.268174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.268494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.268504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 
00:30:50.822 [2024-07-15 15:36:00.268838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.268848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.269207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.269217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.269600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.269609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.269909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.269919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.270226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.270236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.270563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.270572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.270952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.822 [2024-07-15 15:36:00.270961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.822 qpair failed and we were unable to recover it. 00:30:50.822 [2024-07-15 15:36:00.271270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.271279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.271478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.271488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.271738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.271748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 
00:30:50.823 [2024-07-15 15:36:00.272055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.272065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.272370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.272379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.272526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.272536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.272856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.272866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.273194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.273206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.273553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.273562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.273871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.273880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.274094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.274104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.274431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.274440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.274585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.274596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 
00:30:50.823 [2024-07-15 15:36:00.274986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.274995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.275318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.275327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.275626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.275636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.275825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.275836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.276131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.276141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.276290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.276299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.276641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.276650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.276844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.276854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.277213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.277223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.277539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.277549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 
00:30:50.823 [2024-07-15 15:36:00.277855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.277864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.278190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.278200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.278511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.278521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.278910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.278920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.279192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.279202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.279404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.279414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.279620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.279630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.279932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.279942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.280116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.280127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.280440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.280449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 
00:30:50.823 [2024-07-15 15:36:00.280768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.280777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.281100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.281110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.281411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.281420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.281747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.281756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.282074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.282084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.282308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.282317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.282632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.282642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.282947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.282957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.283278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.283288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.283606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.283617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 
00:30:50.823 [2024-07-15 15:36:00.283934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.283943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.284262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.284271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.284573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.284582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.284633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.284643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.284992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.285004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.285316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.285325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.285539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.285549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.285871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.823 [2024-07-15 15:36:00.285880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.823 qpair failed and we were unable to recover it. 00:30:50.823 [2024-07-15 15:36:00.286264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.286273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.286612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.286621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 
00:30:50.824 [2024-07-15 15:36:00.286804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.286814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.287163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.287173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.287481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.287490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.287780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.287790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.288105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.288115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.288299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.288310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.288655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.288665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.288979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.288989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.289281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.289291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.289599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.289608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 
00:30:50.824 [2024-07-15 15:36:00.289922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.289932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.290329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.290338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.290629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.290638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.290872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.290882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.291209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.291219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.291520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.291530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.291865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.291874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.292159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.292168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.292598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.292608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.292898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.292907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 
00:30:50.824 [2024-07-15 15:36:00.293156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.293165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.293472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.293481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.293797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.294114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.294125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.294471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.294481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.294812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.294822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.294903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.294913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.295216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.295226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.295560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.295569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.295896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.295906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 
00:30:50.824 [2024-07-15 15:36:00.296293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.296303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.296632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.296642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.296925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.296936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.297166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.297176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.297564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.297574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.297798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.297807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.298120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.298130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.298433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.298443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.298753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.298763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.299107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.299118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 
00:30:50.824 [2024-07-15 15:36:00.299419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.299428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.299723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.299733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.299922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.299931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.300180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.300189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.300505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.300516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.300857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.300867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.301195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.301205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.301504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.301514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.824 [2024-07-15 15:36:00.301799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.824 [2024-07-15 15:36:00.301809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.824 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.302137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.302147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.825 [2024-07-15 15:36:00.302459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.302468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.302771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.302780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.302990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.303007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.303319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.303329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.303678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.303687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.304007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.304016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.304343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.304352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.304736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.304746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.305080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.305089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.305370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.305379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.825 [2024-07-15 15:36:00.305694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.305704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.306025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.306037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.306356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.306366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.306588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.306598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.306835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.306845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.307145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.307155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.307482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.307492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.307830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.307839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.308201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.308211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.308541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.308550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.825 [2024-07-15 15:36:00.308860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.308870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.309176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.309186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.309483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.309493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.309830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.309840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.310136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.310146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.310459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.310468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.310857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.310866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.311172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.311182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.311416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.311426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.311601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.311610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.825 [2024-07-15 15:36:00.311937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.311947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.312284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.312294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.312603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.312613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.312916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.312926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.313145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.313156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.313445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.313454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.313804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.313814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.314039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.314049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.314412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.314421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.314586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.314596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.825 [2024-07-15 15:36:00.314974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.314984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.315306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.315315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.315613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.315833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.315843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.316142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.316152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.316434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.316444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.316628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.316639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.316953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.316963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.317188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.317197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 00:30:50.825 [2024-07-15 15:36:00.317539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.825 [2024-07-15 15:36:00.317549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.825 qpair failed and we were unable to recover it. 
00:30:50.826 [2024-07-15 15:36:00.317889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.317899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.318296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.318307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.318514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.318523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.318787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.318796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.319185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.319195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.319493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.319502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.319825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.319834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.320141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.320151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.320468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.320478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.320854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.320864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 
00:30:50.826 [2024-07-15 15:36:00.321271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.321281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.321601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.321611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.321945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.321955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.322256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.322265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.322652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.322661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.322852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.322862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.323178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.323188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.323498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.323507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.323736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.323747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.324035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.324044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 
00:30:50.826 [2024-07-15 15:36:00.324192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.324201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.324503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.324512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.324807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.324816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.325003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.325014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.325257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.325266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.325540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.325549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.325859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.325869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.326275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.326285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.326575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.326584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.326877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.326891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 
00:30:50.826 [2024-07-15 15:36:00.327188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.327197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.327499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.327509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.327805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.327815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.328186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.328197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.328516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.328525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.328829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.328838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.329134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.329144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.329476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.329485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.329808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.329817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.330127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.330137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 
00:30:50.826 [2024-07-15 15:36:00.330474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.330484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.330827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.330839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.331158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.331168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.331502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.331512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.826 [2024-07-15 15:36:00.331717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.826 [2024-07-15 15:36:00.331727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.826 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.332025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.332036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.332353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.332363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.332713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.332723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.333064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.333074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.333383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.333392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 
00:30:50.827 [2024-07-15 15:36:00.333565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.333575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.333906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.333916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.334218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.334228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.334563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.334572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.334894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.334904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.335224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.335233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.335507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.335516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.335930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.335940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.336259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.336269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.336585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.336595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 
00:30:50.827 [2024-07-15 15:36:00.336890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.336901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.337221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.337231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.337507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.337517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.337819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.337828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.338023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.338033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.338223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.338233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.338712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.338742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.339119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.339146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.339452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.339460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.339795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.339801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 
00:30:50.827 [2024-07-15 15:36:00.340209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.340236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.340454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.340463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.340642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.340652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.340964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.340971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.341150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.341159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.341749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.341836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.342440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.342527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.343123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.343209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2718000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.343567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.343576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.343887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.343894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 
00:30:50.827 [2024-07-15 15:36:00.344118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.344126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.344318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.344327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.344664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.344670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.344892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.344900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.345247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.345253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.345566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.345573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.345921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.345928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.346274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.346281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.346465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.346472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.346855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.346862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 
00:30:50.827 [2024-07-15 15:36:00.347221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.347229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.347538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.347545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.347881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.347892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.348175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.348182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.348478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.827 [2024-07-15 15:36:00.348485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.827 qpair failed and we were unable to recover it. 00:30:50.827 [2024-07-15 15:36:00.348816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.348824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.349139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.349146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.349304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.349312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.349585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.349592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.349895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.349902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 
00:30:50.828 [2024-07-15 15:36:00.350208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.350214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.350466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.350473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.350807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.350814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.351034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.351042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.351364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.351371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.351710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.351717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.351934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.351941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.352111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.352118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.352431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.352437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.352592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.352599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 
00:30:50.828 [2024-07-15 15:36:00.352962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.352970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.353279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.353285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.353639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.353646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.353951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.353958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.354283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.354289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.354592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.354599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.354899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.354906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.355247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.355254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.355579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.355586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.355902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.355910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 
00:30:50.828 [2024-07-15 15:36:00.356211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.356217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.356514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.356522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.356824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.356831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.357157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.357165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.357332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.357339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.357654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.357662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.358006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.358013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.358356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.358362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.358645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.358652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.358980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.358988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 
00:30:50.828 [2024-07-15 15:36:00.359196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.359203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.359389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.359396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.359707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.359714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.360050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.360057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.360238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.360244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.360480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.360487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.360785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.360792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.361054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.361061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.361398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.361404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.361735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.361742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 
00:30:50.828 [2024-07-15 15:36:00.362043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.362051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.362373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.362381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.362631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.362638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.362934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.362940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.363248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.363254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.363444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.828 [2024-07-15 15:36:00.363451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.828 qpair failed and we were unable to recover it. 00:30:50.828 [2024-07-15 15:36:00.363757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.363765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.364059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.364066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.364454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.364460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.364625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.364633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 
00:30:50.829 [2024-07-15 15:36:00.364956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.364962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.365273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.365279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.365480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.365488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.365765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.365772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.366065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.366072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.366333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.366339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.366759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.366765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.367072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.367078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.367232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.367240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.367597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.367605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 
00:30:50.829 [2024-07-15 15:36:00.367897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.367905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.368203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.368212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.368537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.368544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.368863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.368870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.369193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.369200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.369372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.369379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.369735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.369742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.370033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.370040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.370338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.370344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 00:30:50.829 [2024-07-15 15:36:00.370655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.829 [2024-07-15 15:36:00.370662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:50.829 qpair failed and we were unable to recover it. 
00:30:50.829 [2024-07-15 15:36:00.370962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.829 [2024-07-15 15:36:00.370969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420
00:30:50.829 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats continuously for tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 from 15:36:00.371 through 15:36:00.411 ...]
00:30:50.832 [2024-07-15 15:36:00.411322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.832 [2024-07-15 15:36:00.411360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:50.832 qpair failed and we were unable to recover it.
00:30:50.832 [2024-07-15 15:36:00.411718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.832 [2024-07-15 15:36:00.411730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:50.832 qpair failed and we were unable to recover it.
00:30:50.832 [2024-07-15 15:36:00.412076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.832 [2024-07-15 15:36:00.412090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420
00:30:50.832 qpair failed and we were unable to recover it.
00:30:50.832 [2024-07-15 15:36:00.412472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:50.832 [2024-07-15 15:36:00.412480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420
00:30:50.832 qpair failed and we were unable to recover it.
[... the same failure sequence for tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 repeats from 15:36:00.412 through 15:36:00.433 ...]
00:30:51.110 [2024-07-15 15:36:00.433704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.110 [2024-07-15 15:36:00.433710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420
00:30:51.110 qpair failed and we were unable to recover it.
00:30:51.110 [2024-07-15 15:36:00.433887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.433895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.434080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.434087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.434277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.434284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.434483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.434489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.434670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.434678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.435004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.435011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.110 [2024-07-15 15:36:00.435323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.110 [2024-07-15 15:36:00.435330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.110 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.435623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.435629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.435928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.436234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.436241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 
00:30:51.111 [2024-07-15 15:36:00.436568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.436575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.436814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.436821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.437146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.437153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.437429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.437436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.437609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.437616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.437897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.437904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.438260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.438267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.438602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.438608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.438922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.438928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.439350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.439357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 
00:30:51.111 [2024-07-15 15:36:00.439669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.439675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.440047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.440054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.440373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.440380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.440588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.440595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.440917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.440924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.441105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.441112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.441278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.441285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.441554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.441561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.441906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.441912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.442111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.442119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 
00:30:51.111 [2024-07-15 15:36:00.442269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.442276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.442603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.442609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.442913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.442920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.443227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.443234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.443397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.443404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.443659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.443665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.443998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.444005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.444333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.444339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.444671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.444678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.444985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.444993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 
00:30:51.111 [2024-07-15 15:36:00.445186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.445193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.445506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.445513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.445863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.445869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.446031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.446038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.446369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.446375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.111 [2024-07-15 15:36:00.446698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.111 [2024-07-15 15:36:00.446705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.111 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.447009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.447227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.447434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.447482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 
00:30:51.112 [2024-07-15 15:36:00.447637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.447968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.447975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.448307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.448313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.448637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.448644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.448946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.448953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.449281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.449289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.449605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.449612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.449962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.449968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.450298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.450305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.450627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.450634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 
00:30:51.112 [2024-07-15 15:36:00.450944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.450952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.451258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.451265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.451571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.451578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.451934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.451941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.452235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.452241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.452535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.452541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.452839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.452846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.453013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.453021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.453308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.453314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.453629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.453635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 
00:30:51.112 [2024-07-15 15:36:00.453939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.453946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.454260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.454267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.454558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.454565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.454860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.454868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.455223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.455230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.455439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.455447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.455787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.455793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.456110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.456117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.456441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.456447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.456762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.456769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 
00:30:51.112 [2024-07-15 15:36:00.457071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.457078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.457406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.457412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.457574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.457581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.457874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.457881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.112 [2024-07-15 15:36:00.458234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.112 [2024-07-15 15:36:00.458241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.112 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.458534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.458541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.458867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.458874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.459204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.459211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.459551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.459559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.459915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.459923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 
00:30:51.113 [2024-07-15 15:36:00.460240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.460247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.460557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.460564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.460766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.460774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.460979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.460986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.461365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.461372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.461691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.461697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.462038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.462044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.462150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.462156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.462478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.462485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.462695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.462702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 
00:30:51.113 [2024-07-15 15:36:00.463017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.463024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.463340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.463347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.463664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.463671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.463974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.463981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.464169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.464176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.464488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.464494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.464710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.464921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.464928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.465206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.465213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.465437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.465444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 
00:30:51.113 [2024-07-15 15:36:00.465729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.465735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.466062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.466069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.466387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.466394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.466587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.466593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.466848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.466854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.467195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.467201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.467521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.467528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.467857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.467863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.468311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.468318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.468505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.468512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 
00:30:51.113 [2024-07-15 15:36:00.468820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.468827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.469153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.469160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.113 [2024-07-15 15:36:00.469473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.113 [2024-07-15 15:36:00.469480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.113 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.469775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.469781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.470101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.470109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.470439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.470446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.470766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.470773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.471072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.471079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.471381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.471390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.471715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.471722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 
00:30:51.114 [2024-07-15 15:36:00.471945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.471952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.472283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.472290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.472579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.472586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.472879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.472887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.473205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.473212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.473574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.473580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.473882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.473895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.474187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.474194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.474604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.474611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.474912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.474920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 
00:30:51.114 [2024-07-15 15:36:00.475117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.475124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.475428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.475435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.475652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.475659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.475940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.475947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.476264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.476271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.476560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.476566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.476715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.476722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.477089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.477096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.477382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.477389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.477662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.477668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 
00:30:51.114 [2024-07-15 15:36:00.478051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.478057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.478284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.478290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.478635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.478642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.478958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.478966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.479306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.479313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.479501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.479508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.479816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.479823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.480176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.114 [2024-07-15 15:36:00.480183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.114 qpair failed and we were unable to recover it. 00:30:51.114 [2024-07-15 15:36:00.480575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.480582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.480877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.480894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 
00:30:51.115 [2024-07-15 15:36:00.481102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.481109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.481451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.481458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.481770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.481777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.482070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.482077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.482378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.482386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.482749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.482757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.483078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.483085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.483285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.483292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.483631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.483639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.483855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.483861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 
00:30:51.115 [2024-07-15 15:36:00.484174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.484181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.484485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.484492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.484791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.484797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.485109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.485116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.485350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.485357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.485670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.485677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.485972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.485978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.486295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.486301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.486611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.486618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.486937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.486943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 
00:30:51.115 [2024-07-15 15:36:00.487254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.487261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.487442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.487450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.487726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.487734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.488028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.488035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.488336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.488344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.488642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.488649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.488966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.488974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.489395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.489403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.489698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.489705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.489915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.489922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 
00:30:51.115 [2024-07-15 15:36:00.490282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.490289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.490594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.490600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.490803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.490810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.491133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.491140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.491344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.491351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.491659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.491667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.491961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.491968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.492294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.115 [2024-07-15 15:36:00.492301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.115 qpair failed and we were unable to recover it. 00:30:51.115 [2024-07-15 15:36:00.492630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.492637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.492807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.492814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 
00:30:51.116 [2024-07-15 15:36:00.493078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.493086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.493367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.493373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.493429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.493435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.493652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.493659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.493975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.493983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.494283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.494290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.494590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.494597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.494939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.494945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.495170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.495178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.495521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.495527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 
00:30:51.116 [2024-07-15 15:36:00.495815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.495822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.496038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.496044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.496330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.496337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.496666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.496673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.496825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.496833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.497173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.497180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.497525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.497532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.497831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.497839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.498158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.498166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.498442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.498449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 
00:30:51.116 [2024-07-15 15:36:00.498742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.498749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.499060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.499067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.499393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.499400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.499581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.499589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.499801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.499808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.500123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.500131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.500330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.500337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.500660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.500667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.500939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.500946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.501112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.501118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 
00:30:51.116 [2024-07-15 15:36:00.501396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.501402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.501708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.501715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.501922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.501929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.502127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.502134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.502518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.502525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.502817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.502823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.503174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.503181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.503336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.503343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.503713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.116 [2024-07-15 15:36:00.503719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.116 qpair failed and we were unable to recover it. 00:30:51.116 [2024-07-15 15:36:00.504035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.504042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 
00:30:51.117 [2024-07-15 15:36:00.504371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.504378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.504673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.504679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.504964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.504971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.505159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.505165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.505516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.505523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.505823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.505830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.506137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.506144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.506471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.506478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.506688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.506696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.506932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.506939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 
00:30:51.117 [2024-07-15 15:36:00.507329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.507336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.507627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.507634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.507923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.507929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.508234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.508240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.508402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.508409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.508785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.508792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.509095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.509102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.509388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.509394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.509706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.509713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.509909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.509916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 
00:30:51.117 [2024-07-15 15:36:00.510251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.510258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.510623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.510630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.510916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.510923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.511283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.511289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.511628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.511635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.511923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.511929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.512273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.512280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.512548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.512555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.512901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.512908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.513246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.513252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 
00:30:51.117 [2024-07-15 15:36:00.513543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.513550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.513842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.513848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.514159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.514461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.514467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.514771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.514778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.515084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.515090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.515402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.515409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.515625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.515632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.515939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.515945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.117 qpair failed and we were unable to recover it. 00:30:51.117 [2024-07-15 15:36:00.516228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.117 [2024-07-15 15:36:00.516236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 
00:30:51.118 [2024-07-15 15:36:00.516539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.516546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.516714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.516721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.517071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.517078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.517238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.517246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.517560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.517566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.517929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.517936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.518247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.518255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.518584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.518591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.518920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.518928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.519218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.519225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 
00:30:51.118 [2024-07-15 15:36:00.519523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.519530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.519714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.519721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.519892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.519900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.520104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.520112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.520300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.520307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.520623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.520630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.520964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.520971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.521291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.521298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.521597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.521604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 00:30:51.118 [2024-07-15 15:36:00.521938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.521945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it. 
00:30:51.118 [2024-07-15 15:36:00.522306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.118 [2024-07-15 15:36:00.522312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.118 qpair failed and we were unable to recover it.
00:30:51.118 [... the same three-entry sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 15:36:00.522 through 15:36:00.584 ...]
00:30:51.124 [2024-07-15 15:36:00.584335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.584341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it.
00:30:51.124 [2024-07-15 15:36:00.584676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.584683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.585003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.585011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.585179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.585187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.585488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.585495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.585785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.585792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.586087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.586095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.586292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.586299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.586687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.586693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.586999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.587007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.587328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.587335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 
00:30:51.124 [2024-07-15 15:36:00.587627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.587633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.587946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.587955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.588296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.588302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.588593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.588600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.588911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.124 [2024-07-15 15:36:00.588919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.124 qpair failed and we were unable to recover it. 00:30:51.124 [2024-07-15 15:36:00.589266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.589273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.589461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.589468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.589791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.589797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.590015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.590023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.590335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.590341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 
00:30:51.125 [2024-07-15 15:36:00.590641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.590647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.590939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.590946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.591260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.591266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.591542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.591549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.591891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.591898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.592222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.592228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.592540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.592547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.592768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.592775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.593075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.593081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.593376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.593383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 
00:30:51.125 [2024-07-15 15:36:00.593679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.593686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.593998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.594005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.594299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.594306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.594633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.594640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.594830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.594837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.595136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.595143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.595421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.595428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.595631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.595638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.595909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.595916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.596232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.596239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 
00:30:51.125 [2024-07-15 15:36:00.596435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.596443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.596848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.596855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.597087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.597095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.597398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.597405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.597715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.597722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.598035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.598043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.598349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.598355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.598744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.598751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.598975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.598983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.599271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.599277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 
00:30:51.125 [2024-07-15 15:36:00.599497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.599504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.599728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.599735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.600058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.600065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.600254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.600261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.600618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.125 [2024-07-15 15:36:00.600625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.125 qpair failed and we were unable to recover it. 00:30:51.125 [2024-07-15 15:36:00.600964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.600972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.601151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.601158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.601486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.601493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.601807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.601814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.602069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.602077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 
00:30:51.126 [2024-07-15 15:36:00.602379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.602386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.602693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.602700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.602907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.602914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.603180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.603187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.603503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.603509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.603829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.603836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.604029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.604036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.604347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.604354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.604646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.604652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.604952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.604960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 
00:30:51.126 [2024-07-15 15:36:00.605255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.605261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.605551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.605557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.605869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.605876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.606091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.606098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.606448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.606454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.606665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.606671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.606890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.606897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.607224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.607231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.607442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.607448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.607656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.607663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 
00:30:51.126 [2024-07-15 15:36:00.608050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.608057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.608249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.608255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.608614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.608621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.608962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.608968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.609290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.609296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.609616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.609625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.609931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.609938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.610217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.610224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.610549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.610556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.610888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.610895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 
00:30:51.126 [2024-07-15 15:36:00.610933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.610940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.611248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.611254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.126 [2024-07-15 15:36:00.611460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.126 [2024-07-15 15:36:00.611468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.126 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.611656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.611662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.611951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.611957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.612291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.612298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.612601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.612607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.612895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.612902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.613262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.613269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.613573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.613580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 
00:30:51.127 [2024-07-15 15:36:00.613872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.613879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.614226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.614233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.614548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.614556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.614895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.614902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.615211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.615218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.615507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.615514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.615684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.615692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.615940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.615947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.616294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.616301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.616597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.616604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 
00:30:51.127 [2024-07-15 15:36:00.616755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.616762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.617036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.617043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.617347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.617353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.617642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.617649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.617934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.617941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.618260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.618266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.618457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.618463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.618810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.618817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.619137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.619144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.619536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.619542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 
00:30:51.127 [2024-07-15 15:36:00.619829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.619835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.620187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.620194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.620468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.620474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.620737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.620745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.621061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.621067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.621373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.127 [2024-07-15 15:36:00.621381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.127 qpair failed and we were unable to recover it. 00:30:51.127 [2024-07-15 15:36:00.621698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.621704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.621983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.621990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.622305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.622312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.622600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.622607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 
00:30:51.128 [2024-07-15 15:36:00.622925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.622932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.623234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.623240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.623540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.623547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.623844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.623851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.624069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.624076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.624266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.624272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.624591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.624598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.624895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.624902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.625199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.625205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.625369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.625376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 
00:30:51.128 [2024-07-15 15:36:00.625652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.625659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.625913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.625920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.626125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.626131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.626504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.626510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.626896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.626902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.627098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.627106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.627434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.627440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.627616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.627623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.627959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.627966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.628160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.628167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 
00:30:51.128 [2024-07-15 15:36:00.628404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.628411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.628683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.628689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.628889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.628895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.629149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.629155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.629464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.629471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.629790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.629797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.629914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.629922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.630118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.630124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.630288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.630294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.630647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.630655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 
00:30:51.128 [2024-07-15 15:36:00.630950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.630956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.631273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.631280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.631589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.631595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.631947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.631953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.632295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.632301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.632496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.632505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.632818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.128 [2024-07-15 15:36:00.632825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.128 qpair failed and we were unable to recover it. 00:30:51.128 [2024-07-15 15:36:00.633129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.633135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.633337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.633344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.633650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.633656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 
00:30:51.129 [2024-07-15 15:36:00.633968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.633975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.634275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.634282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.634576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.634582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.634865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.634872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.635165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.635172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.635486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.635492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.635799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.635805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.636205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.636213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.636534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.636541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.636820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.636827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 
00:30:51.129 [2024-07-15 15:36:00.637134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.637140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.637331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.637338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.637683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.637689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.637984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.637991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.638294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.638300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.638475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.638482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.638708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.638715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.638861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.638868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.639153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.639159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.639453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.639459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 
00:30:51.129 [2024-07-15 15:36:00.639657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.639664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.639964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.639971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.640275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.640282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.640570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.640577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.640793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.640800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.641122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.641129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.641428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.641435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.641649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.641656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.641986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.641992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.642329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.642335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 
00:30:51.129 [2024-07-15 15:36:00.642692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.642698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.642912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.642919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.643213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.643220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.643536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.643542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.643855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.643862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.644033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.644042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.644323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.644330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.129 qpair failed and we were unable to recover it. 00:30:51.129 [2024-07-15 15:36:00.644516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.129 [2024-07-15 15:36:00.644523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.644823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.644829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.645024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.645032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 
00:30:51.130 [2024-07-15 15:36:00.645513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.645549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.645850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.645861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.646281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.646317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.646626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.646633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.646818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.646825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.647161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.647168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.647498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.647505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.647809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.647815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.648134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.648141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.648514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.648520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 
00:30:51.130 [2024-07-15 15:36:00.648809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.648815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.649147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.649153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.649460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.649467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.649790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.649797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.650113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.650121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.650438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.650444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.650767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.650774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.651092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.651098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.651443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.651449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.651753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.651760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 
00:30:51.130 [2024-07-15 15:36:00.652154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.652161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.652485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.652491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.652809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.652816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.653157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.653165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.653470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.653477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.653809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.653815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.654186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.654193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.654491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.654498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.654680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.654687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.654978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.654985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 
00:30:51.130 [2024-07-15 15:36:00.655304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.655311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.655624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.655631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.655924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.655930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.656253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.656259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.656662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.656668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.656865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.656873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.657209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.657216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.130 qpair failed and we were unable to recover it. 00:30:51.130 [2024-07-15 15:36:00.657510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.130 [2024-07-15 15:36:00.657517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.657843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.657850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.658160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.658167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 
00:30:51.131 [2024-07-15 15:36:00.658477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.658483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.658786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.658793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.659098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.659105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.659399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.659406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.659839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.659845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.660077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.660084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.660251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.660258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.660411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.660418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.660612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.660619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.660962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.660969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 
00:30:51.131 [2024-07-15 15:36:00.661272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.661279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.661581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.661587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.661922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.661929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.662123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.662129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.662428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.662435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.662810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.662817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.663127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.663134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.663327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.663335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.663721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.663728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.664039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.664046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 
00:30:51.131 [2024-07-15 15:36:00.664340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.664346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.664722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.664728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.665036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.665043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.665360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.665366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.665687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.665694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.666010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.666018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.666325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.666332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.666569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.666576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.666882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.666892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.667177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.667183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 
00:30:51.131 [2024-07-15 15:36:00.667479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.667485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.667790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.667796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.668117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.668123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.668439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.668446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.668649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.668655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.669034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.669043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.669346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.669353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.131 qpair failed and we were unable to recover it. 00:30:51.131 [2024-07-15 15:36:00.669678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.131 [2024-07-15 15:36:00.669685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.669999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.670007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.670340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.670348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 
00:30:51.132 [2024-07-15 15:36:00.670572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.670579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.670799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.670806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.671107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.671114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.671320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.671326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.671621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.671628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.671922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.671929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.672212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.672218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.672421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.672429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.672747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.672754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.673062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.673069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 
00:30:51.132 [2024-07-15 15:36:00.673378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.673384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.673699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.673705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.673882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.673891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.674222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.674229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.674523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.674530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.674819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.674826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.675139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.675146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.675539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.675545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.675664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.675671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.675949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.675955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 
00:30:51.132 [2024-07-15 15:36:00.676305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.676312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.676513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.676520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.676833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.676840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.677142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.677150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.677464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.677471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.677812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.677819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.678045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.678054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.678379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.678386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.132 qpair failed and we were unable to recover it. 00:30:51.132 [2024-07-15 15:36:00.678697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.132 [2024-07-15 15:36:00.678704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.679014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.679021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 
00:30:51.133 [2024-07-15 15:36:00.679352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.679358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.679681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.679688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.680010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.680016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.680331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.680337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.680738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.680745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.680904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.680913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.681196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.681202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.681501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.681508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.681705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.681712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.681895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.681903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 
00:30:51.133 [2024-07-15 15:36:00.682130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.682136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.682414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.682420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.682752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.682759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.683067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.683074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it.
00:30:51.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 901174 Killed "${NVMF_APP[@]}" "$@"
00:30:51.133 [2024-07-15 15:36:00.683305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.683312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.683625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.683632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it.
00:30:51.133 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:51.133 [2024-07-15 15:36:00.683952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.683959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.684244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.684251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it.
00:30:51.133 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:51.133 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:51.133 [2024-07-15 15:36:00.684598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.684606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.684812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.684820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it.
00:30:51.133 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:51.133 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:51.133 [2024-07-15 15:36:00.685133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.685140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.685342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.685353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.685677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.685684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.685889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.685896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.686123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.686130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.686300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.686308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it.
00:30:51.133 [2024-07-15 15:36:00.686612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.686618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.686911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.686918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.687244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.687250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.687446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.687453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.687643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.687650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.687953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.687960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.688301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.688308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.688586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.688592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.688864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.688871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.689101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.689108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 
00:30:51.133 [2024-07-15 15:36:00.689286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.689294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.689597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.689605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.689891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.689898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.690216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.690223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.690541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.690549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.690846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.690853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.691186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.133 [2024-07-15 15:36:00.691193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.133 qpair failed and we were unable to recover it. 00:30:51.133 [2024-07-15 15:36:00.691506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.691514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.691829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.691836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.692162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.692170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 
00:30:51.134 [2024-07-15 15:36:00.692465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.692473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=902238
00:30:51.134 [2024-07-15 15:36:00.692654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.692663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 902238
00:30:51.134 [2024-07-15 15:36:00.693055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.693064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 902238 ']'
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:51.134 [2024-07-15 15:36:00.693397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.693407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:51.134 [2024-07-15 15:36:00.693741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.693750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:51.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:51.134 [2024-07-15 15:36:00.693975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.693983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:51.134 [2024-07-15 15:36:00.694193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.694204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 15:36:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:51.134 [2024-07-15 15:36:00.694514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.694524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.694864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.694872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.695200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.695208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.695535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.695543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.695806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.695814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.695947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.695955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.696135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.696142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.696463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.696471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it.
00:30:51.134 [2024-07-15 15:36:00.696754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.696762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.697074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.697083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.697333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.697341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.697659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.697667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.697852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.697860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.698106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.698114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.698421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.698429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.698752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.698760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.699074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.699081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.699369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.699376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 
00:30:51.134 [2024-07-15 15:36:00.699688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.699695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.699893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.699900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.700187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.700195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.700375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.700383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.700729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.700737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.701068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.701076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.701412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.701421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.701713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.701723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.702039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.702046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.702377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.702385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 
00:30:51.134 [2024-07-15 15:36:00.702672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.702680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.702963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.702971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.703304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.703312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.703622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.703629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.703837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.703844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.704069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.704076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.704262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.704270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.704615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.704623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.704957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.704966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.705149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.705157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 
00:30:51.134 [2024-07-15 15:36:00.705432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.705439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.705637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.705645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.705919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.705928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.706228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.134 [2024-07-15 15:36:00.706236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.134 qpair failed and we were unable to recover it. 00:30:51.134 [2024-07-15 15:36:00.706507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.706515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.706828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.706835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.707067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.707075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.707371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.707378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.707716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.707723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.707936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.707943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 
00:30:51.135 [2024-07-15 15:36:00.708179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.708187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.708487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.708494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.708804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.708811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.709071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.709077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.709247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.709256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.709702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.709708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.709934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.709941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.710235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.710243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.710560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.710567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.710864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.710871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 
00:30:51.135 [2024-07-15 15:36:00.711186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.711193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.711494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.711500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.711802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.711809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.712124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.712131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.712452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.712460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.712735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.712742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.713060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.713068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.713377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.713384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.713544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.713552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.713877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.713886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 
00:30:51.135 [2024-07-15 15:36:00.714144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.714151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.714473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.714480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.135 [2024-07-15 15:36:00.714670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.135 [2024-07-15 15:36:00.714677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.135 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.714865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.714873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.715065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.715072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.715408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.715417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.715720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.715728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.716034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.716041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.716345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.716353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 00:30:51.411 [2024-07-15 15:36:00.716688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.411 [2024-07-15 15:36:00.716695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.411 qpair failed and we were unable to recover it. 
00:30:51.411 [2024-07-15 15:36:00.717007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.717014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.717338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.717346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.717635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.717642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.717850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.717857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.718179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.718186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.718229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.718236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.718407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.718414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.718795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.718802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.719127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.719134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.719429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.719436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 
00:30:51.412 [2024-07-15 15:36:00.719738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.719744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.720073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.720081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.720366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.720374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.720692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.720699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.721074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.721083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.721388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.721395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.721699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.721706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.722000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.722007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.722334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.722340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.722661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.722668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 
00:30:51.412 [2024-07-15 15:36:00.722925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.722932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.723244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.723251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.723464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.723471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.723782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.723790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.724163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.724170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.724467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.724474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.724688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.724695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.412 [2024-07-15 15:36:00.725042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.412 [2024-07-15 15:36:00.725048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.412 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.725377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.725384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.725697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.725705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 
00:30:51.413 [2024-07-15 15:36:00.725903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.725911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.726079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.726088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.726262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.726269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.726498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.726506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.726811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.726818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.727199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.727207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.727396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.727404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.727759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.727766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.728095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.728103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.728273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.728280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 
00:30:51.413 [2024-07-15 15:36:00.728533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.728540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.728846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.728854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.729185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.729192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.729482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.729490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.729826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.729833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.730223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.730230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.730426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.730433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.730602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.730609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.730821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.730828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.731070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.731077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 
00:30:51.413 [2024-07-15 15:36:00.731371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.731378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.731683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.731689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.731894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.731901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.732187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.732194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.732577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.732585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.413 [2024-07-15 15:36:00.732890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.413 [2024-07-15 15:36:00.732897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.413 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.733205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.733213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.733530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.733537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.733712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.733719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.734019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.734026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 
00:30:51.414 [2024-07-15 15:36:00.734348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.734354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.734651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.734657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.734994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.735001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.735335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.735342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.735669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.735677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.736019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.736027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.736216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.736224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.736544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.736550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.736754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.736761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.737071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.737078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 
00:30:51.414 [2024-07-15 15:36:00.737478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.737485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.737776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.737783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.738096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.738103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.738510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.738517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.738813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.738820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.739171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.739178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.739349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.739357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.739680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.739687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.740000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.740007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.740307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.740313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 
00:30:51.414 [2024-07-15 15:36:00.740615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.740622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.741041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.741048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.741353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.414 [2024-07-15 15:36:00.741359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.414 qpair failed and we were unable to recover it. 00:30:51.414 [2024-07-15 15:36:00.741662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.741668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.741963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.741970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.742318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.742326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.742634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.742641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.742943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.742950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.743143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.743151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.743466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.743473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 
00:30:51.415 [2024-07-15 15:36:00.743773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.743780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.744065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.744072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.744388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.744395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.744674] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:30:51.415 [2024-07-15 15:36:00.744710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.744719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.744726] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.415 [2024-07-15 15:36:00.745021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.745030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.745349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.745355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.745690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.745697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.746007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.746015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.746334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.746341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 
00:30:51.415 [2024-07-15 15:36:00.746625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.746633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.746943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.746951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.747274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.747282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.747594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.747601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.747797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.747805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.748119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.748127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.748440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.748447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.748761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.748770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.749062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.749069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.749251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.749259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 
00:30:51.415 [2024-07-15 15:36:00.749535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.415 [2024-07-15 15:36:00.749542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.415 qpair failed and we were unable to recover it. 00:30:51.415 [2024-07-15 15:36:00.749724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.749732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.750048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.750056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.750247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.750254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.750560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.750568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.750880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.750891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.751203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.751211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.751544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.751551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.751865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.751872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.752205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.752213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 
00:30:51.416 [2024-07-15 15:36:00.752494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.752501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.752859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.752867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.753186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.753195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.753506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.753513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.753820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.753828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.754027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.754035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.754406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.754414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.754716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.754723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.755017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.755025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.755346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.755354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 
00:30:51.416 [2024-07-15 15:36:00.755665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.755672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.755891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.755898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.756194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.756201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.756522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.756530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.756819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.756827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.757144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.757152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.757452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.757459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.757758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.757766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.416 [2024-07-15 15:36:00.758068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.416 [2024-07-15 15:36:00.758076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.416 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.758404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.758412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 
00:30:51.417 [2024-07-15 15:36:00.758713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.758721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.759050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.759058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.759377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.759384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.759593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.759600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.759891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.759898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.760207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.760215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.760404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.760411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.760724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.760734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.761076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.761083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.761299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.761306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 
00:30:51.417 [2024-07-15 15:36:00.761629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.761636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.761948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.761963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.762305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.762312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.762469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.762476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.762812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.762819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.763133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.763140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.763320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.763327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.763717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.763724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.764038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.764044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.764381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.764388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 
00:30:51.417 [2024-07-15 15:36:00.764592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.764599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.417 [2024-07-15 15:36:00.764920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.417 [2024-07-15 15:36:00.764927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.417 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.765054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.765060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.765358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.765365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.765631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.765638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.765838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.765844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.766131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.766138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.766450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.766458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.766804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.766812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.767129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.767136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 
00:30:51.418 [2024-07-15 15:36:00.767428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.767435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.767747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.767754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.768078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.768084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.768285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.768291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.768618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.768626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.768815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.768822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.769041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.769048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.769353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.769359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.769676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.769683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.769879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.769888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 
00:30:51.418 [2024-07-15 15:36:00.770098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.770105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.770414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.770421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.770718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.770724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.771054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.771062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.771385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.771393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.771698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.771705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.772011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.772018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.418 qpair failed and we were unable to recover it. 00:30:51.418 [2024-07-15 15:36:00.772354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.418 [2024-07-15 15:36:00.772362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.772624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.772630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.772942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.772949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 
00:30:51.419 [2024-07-15 15:36:00.773337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.773343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.773531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.773539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.773868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.773874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.774176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.774183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.774471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.774478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.774782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.774790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.774962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.774969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.775316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.775322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.775630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.775636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.775951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.775958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 
00:30:51.419 [2024-07-15 15:36:00.776287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.776294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.776479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.776487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.776793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.776799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.777105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.777112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.777366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.777374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.777545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.777552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.777861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.777867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.778174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.778181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.778516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.778523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.778826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.778833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 
00:30:51.419 [2024-07-15 15:36:00.779053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.779060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.779372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.779379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.779690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.779697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.419 [2024-07-15 15:36:00.779996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.780005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.780311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.419 [2024-07-15 15:36:00.780318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.419 qpair failed and we were unable to recover it. 00:30:51.419 [2024-07-15 15:36:00.780628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.780635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.780948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.780955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.781130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.781138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.781440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.781447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.781751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.781757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 
00:30:51.420 [2024-07-15 15:36:00.781943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.781950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.782315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.782322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.782679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.782686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.782994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.783001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.783353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.783360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.783556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.783563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.783757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.783764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.784066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.784074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.784282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.784289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.784601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.784609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 
00:30:51.420 [2024-07-15 15:36:00.784727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.784734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2720000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.785113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.785150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.785475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.785487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.785889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.785900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.786308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.786343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.786649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.786662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.786848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.786860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.787197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.787207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.787530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.787539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.787858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.787867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 
00:30:51.420 [2024-07-15 15:36:00.788167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.788177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.788521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.788530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.788854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.420 [2024-07-15 15:36:00.788864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.420 qpair failed and we were unable to recover it. 00:30:51.420 [2024-07-15 15:36:00.789200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.789210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.789541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.789551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.789734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.789744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.790058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.790070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.790264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.790275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.790609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.790619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.790919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.790929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 
00:30:51.421 [2024-07-15 15:36:00.791280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.791290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.791622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.791631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.791842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.791851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.792169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.792179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.792588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.792597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.792906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.792917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.793264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.793274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.793564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.793574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.793770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.793780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.793972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.793983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 
00:30:51.421 [2024-07-15 15:36:00.794355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.794364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.794656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.794665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.795007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.795018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.795207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.795217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.795489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.795499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.795635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.795645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.795953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.795964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.796287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.796299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.796517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.796526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 00:30:51.421 [2024-07-15 15:36:00.796835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.796845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.421 qpair failed and we were unable to recover it. 
00:30:51.421 [2024-07-15 15:36:00.797219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.421 [2024-07-15 15:36:00.797229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.797531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.797540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.797850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.797859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.798142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.798152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.798570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.798579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.798869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.798879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.799085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.799095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.799400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.799411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.799608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.799618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.799912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.799923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 
00:30:51.422 [2024-07-15 15:36:00.800144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.800153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.800379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.800388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.800707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.800716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.800909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.800920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.801304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.801313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.801485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.801496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.801799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.801808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.802127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.802137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.802290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.802299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.802680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.802691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 
00:30:51.422 [2024-07-15 15:36:00.803038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.803048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.803370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.803380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.803545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.803555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.803935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.803947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.804322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.804332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.804528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.804539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.804880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.804893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.805092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.805102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.422 qpair failed and we were unable to recover it. 00:30:51.422 [2024-07-15 15:36:00.805450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.422 [2024-07-15 15:36:00.805460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.805778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.805788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 
00:30:51.423 [2024-07-15 15:36:00.806191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.806200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.806493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.806503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.806795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.806804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.807067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.807077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.807142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.807151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.807466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.807476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.807727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.807737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.808058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.808072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.808366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.808376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.808569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.808579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 
00:30:51.423 [2024-07-15 15:36:00.808899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.808910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.809238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.809249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.809601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.809612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.809908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.809918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.810119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.810128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.810481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.810490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.810815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.810824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.811146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.811156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.811461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.811471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.811672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.811682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 
00:30:51.423 [2024-07-15 15:36:00.811906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.811916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.812239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.812250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.812577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.812587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.812874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.812897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.813207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.813217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.813541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.813550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.423 qpair failed and we were unable to recover it. 00:30:51.423 [2024-07-15 15:36:00.813927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.423 [2024-07-15 15:36:00.813938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.814263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.814273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.814589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.814598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.814906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.814916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 
00:30:51.424 [2024-07-15 15:36:00.815215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.815225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.815536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.815547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.815605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.815616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.815908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.815918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.816251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.816261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.816630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.816640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.816810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.816820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.817088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.817098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.817413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.817423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.817633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.817643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 
00:30:51.424 [2024-07-15 15:36:00.817867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.817876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.818243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.818253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.818327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.818336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.818535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.818545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.818861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.818871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.819192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.819203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.819524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.819535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.819725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.424 [2024-07-15 15:36:00.819737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.424 qpair failed and we were unable to recover it. 00:30:51.424 [2024-07-15 15:36:00.820055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.820065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.820388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.820397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 
00:30:51.425 [2024-07-15 15:36:00.820708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.820717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.821043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.821053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.821259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.821268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.821553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.821563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.821874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.821887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.822191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.822201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.822475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.822485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.822790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.822800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.822976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.822986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.823220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.823229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 
00:30:51.425 [2024-07-15 15:36:00.823557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.823566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.823927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.823937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.824242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.824251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.824562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.824573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.824875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.824894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.825118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.825129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.825462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.825472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.825786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.825795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.826178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.826188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.826481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.826491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 
00:30:51.425 [2024-07-15 15:36:00.826891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.826901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.827191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.827201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.827495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.827505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.827797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.827807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.828120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.828131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.425 qpair failed and we were unable to recover it. 00:30:51.425 [2024-07-15 15:36:00.828431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.425 [2024-07-15 15:36:00.828441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.828751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.828761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.829069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.829079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.829272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.829283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.829600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.829611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 
00:30:51.426 [2024-07-15 15:36:00.829925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.829935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.830243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.830253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.830487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.830497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.830687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.830697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.830749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.830760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.831081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.831091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.831388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.831398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.831584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.831596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.831962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.831971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.832174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.832184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 
00:30:51.426 [2024-07-15 15:36:00.832502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.832512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.832817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.832826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.833190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.833200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.833497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.833506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.833903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.833913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.834101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.834110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.834316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.834326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.834670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.834680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.834908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.834918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.835246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.835256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 
00:30:51.426 [2024-07-15 15:36:00.835572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.835581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.835911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.835921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.836283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.426 [2024-07-15 15:36:00.836293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.426 qpair failed and we were unable to recover it. 00:30:51.426 [2024-07-15 15:36:00.836455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.836465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.836779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.836788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.837116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.837125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.837313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.427 [2024-07-15 15:36:00.837417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.837426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.837705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.837714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.837932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.837942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 
00:30:51.427 [2024-07-15 15:36:00.838286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.838296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.838592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.838602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.838917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.838926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.839097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.839107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.839432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.839442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.839760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.839770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.840094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.840105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.840380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.840389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.840696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.840705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.841114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.841124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 
00:30:51.427 [2024-07-15 15:36:00.841453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.841462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.841753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.841764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.842075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.842086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.842376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.842387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.842685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.842695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.843035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.843045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.843163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.843172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.843404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.843413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.843756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.843767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.844088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.844099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 
00:30:51.427 [2024-07-15 15:36:00.844403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.844412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.427 [2024-07-15 15:36:00.844727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.427 [2024-07-15 15:36:00.844737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.427 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.844993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.845003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.845329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.845338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.845556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.845566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.845753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.845763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.845981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.845991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.846178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.846188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.846500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.846509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.846821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.846830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 
00:30:51.428 [2024-07-15 15:36:00.847137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.847147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.847414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.847426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.847754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.847764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.848094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.848104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.848419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.848429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.848723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.848733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.848985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.848996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.849328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.849338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.849556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.849566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.849792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.849801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 
00:30:51.428 [2024-07-15 15:36:00.850111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.850121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.850328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.850338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.850708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.850718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.850937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.850946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.851322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.851332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.851628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.851638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.851840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.851850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.852150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.852160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.852448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.852457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 00:30:51.428 [2024-07-15 15:36:00.852762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.428 [2024-07-15 15:36:00.852772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.428 qpair failed and we were unable to recover it. 
00:30:51.429 [2024-07-15 15:36:00.853090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.853101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.853307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.853317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.853647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.853657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.853849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.853858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.854188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.854198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.854512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.854521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.854657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.854668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.855059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.855069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.855401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.855412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.855575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.855585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 
00:30:51.429 [2024-07-15 15:36:00.855829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.855838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.856265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.856274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.856580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.856589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.856861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.856871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.857233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.857243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.857524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.857535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.857826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.857836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.858223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.858233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.858403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.858414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.858707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.858716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 
00:30:51.429 [2024-07-15 15:36:00.859051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.859061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.859389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.859399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.859717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.859727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.859901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.859911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.860309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.860318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.860642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.860651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.860965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.860975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.861185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.861195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.861514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.861524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.429 qpair failed and we were unable to recover it. 00:30:51.429 [2024-07-15 15:36:00.861746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.429 [2024-07-15 15:36:00.861756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 
00:30:51.430 [2024-07-15 15:36:00.862061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.862071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.862389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.862398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.862687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.862696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.862982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.862992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.863320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.863330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.863523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.863534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.863860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.863870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.864214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.864225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.864443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.864452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.864658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.864670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 
00:30:51.430 [2024-07-15 15:36:00.864992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.865003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.865324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.865334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.865537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.865546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.865726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.865736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.866095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.866105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.866468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.866477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.866770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.866779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.866995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.867005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.867222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.867233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.867346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.867356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 
00:30:51.430 [2024-07-15 15:36:00.867666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.867675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.868011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.868021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.868393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.868403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.430 [2024-07-15 15:36:00.868717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.430 [2024-07-15 15:36:00.868727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.430 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.869045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.869056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.869367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.869376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.869716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.869726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.869915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.869932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.870317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.870328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.870668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.870679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 
00:30:51.431 [2024-07-15 15:36:00.871055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.871066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.871384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.871393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.871719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.871729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.871934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.871944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.872267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.872277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.872471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.872482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.872797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.872807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.873129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.873140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.873441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.873451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.873645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.873656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 
00:30:51.431 [2024-07-15 15:36:00.874020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.874031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.874201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.874211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.874583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.874593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.874888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.874899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.875217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.875227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.875530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.875540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.875853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.875862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.876155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.876166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.876479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.876489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.876674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.876686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 
00:30:51.431 [2024-07-15 15:36:00.876947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.876958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.877290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.877300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.877511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.877521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.877829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.877838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.878173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.878183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.878504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.878514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.878805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.878815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.879131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.879141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.879432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.879445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.879783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.879793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 
00:30:51.431 [2024-07-15 15:36:00.880134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.880144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.880523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.880532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.431 qpair failed and we were unable to recover it. 00:30:51.431 [2024-07-15 15:36:00.880785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.431 [2024-07-15 15:36:00.880794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.881112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.881123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.881416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.881426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.881771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.881781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.882098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.882108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.882387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.882397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.882731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.882741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.883052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.883062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 
00:30:51.432 [2024-07-15 15:36:00.883267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.883276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.883467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.883479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.883756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.883765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.883955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.883971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.884313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.884323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.884614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.884623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.884922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.884932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.885248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.885258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.885607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.885616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.885937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.885947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 
00:30:51.432 [2024-07-15 15:36:00.886274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.886283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.886474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.886484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.886656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.886666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.886976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.886986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.887273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.887284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.887624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.887634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.887984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.887994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.888293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.888303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.888619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.888628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.888930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.888940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 
00:30:51.432 [2024-07-15 15:36:00.889229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.889239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.889537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.889547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.889603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.889613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.889899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.889909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.890231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.890240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.890613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.890622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.890916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.890926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.891261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.891270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.891611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.891623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.892035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.892045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 
00:30:51.432 [2024-07-15 15:36:00.892321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.892331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.892654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.892663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.892962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.892972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.893298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.893308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.893636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.893646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.893946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.432 [2024-07-15 15:36:00.893957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.432 qpair failed and we were unable to recover it. 00:30:51.432 [2024-07-15 15:36:00.894275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.894284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.894623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.894633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.894957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.894966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.895255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.895264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 
00:30:51.433 [2024-07-15 15:36:00.895564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.895573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.895866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.895876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.896171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.896181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.896369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.896379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.896766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.896776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.897108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.897118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.897386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.897395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.897693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.897703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.898027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.898036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.898357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.898366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 
00:30:51.433 [2024-07-15 15:36:00.898671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.898681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.898970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.898980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.899271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.899281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.899479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.899488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.899805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.899814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.899970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.899980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.900312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.900321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.900612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.900622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.900918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.900927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.901245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.901254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 
00:30:51.433 [2024-07-15 15:36:00.901640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.901650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.901989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.901999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.902320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.902330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.902588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.433 [2024-07-15 15:36:00.902612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.902620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. [2024-07-15 15:36:00.902622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.902631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.433 [2024-07-15 15:36:00.902638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.433 [2024-07-15 15:36:00.902643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.433 [2024-07-15 15:36:00.902810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.902819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.902806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:51.433 [2024-07-15 15:36:00.902966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:51.433 [2024-07-15 15:36:00.903051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:51.433 [2024-07-15 15:36:00.903048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:51.433 [2024-07-15 15:36:00.903272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.903282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.903595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.903605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 
00:30:51.433 [2024-07-15 15:36:00.903938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.903947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.904148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.904158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.904505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.904514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.904808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.904817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.905020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.905030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.905233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.905243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.905590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.905600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.905924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.905934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.906117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.906128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 00:30:51.433 [2024-07-15 15:36:00.906338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.433 [2024-07-15 15:36:00.906348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.433 qpair failed and we were unable to recover it. 
00:30:51.434 [2024-07-15 15:36:00.906544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.906554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.906915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.906927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.907247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.907256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.907582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.907592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.907891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.907901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.908121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.908130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.908468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.908477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.908772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.908781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.909114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.909124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.909429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.909439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 
00:30:51.434 [2024-07-15 15:36:00.909644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.909654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.910003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.910014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.910196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.910207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.910417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.910427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.910610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.910621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.910927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.910937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.911242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.911252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.911442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.911451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.911630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.911640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.911948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.911957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 
00:30:51.434 [2024-07-15 15:36:00.912170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.912179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.912414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.912423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.912734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.912743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.912929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.912939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.913163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.913172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.913515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.913524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.913841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.913851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.914059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.914069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.914407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.914417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.914603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.914614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 
00:30:51.434 [2024-07-15 15:36:00.914726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.914735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.914904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.914914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.915078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.915088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.915382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.915391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.915583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.915594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.915960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.915970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.916295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.434 [2024-07-15 15:36:00.916305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.434 qpair failed and we were unable to recover it. 00:30:51.434 [2024-07-15 15:36:00.916425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.916434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.916740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.916750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.917135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.917145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 
00:30:51.435 [2024-07-15 15:36:00.917436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.917445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.917752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.917764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.918149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.918158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.918357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.918367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.918694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.918704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.919047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.919057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.919262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.919271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.919459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.919468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.919596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.919606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 00:30:51.435 [2024-07-15 15:36:00.919915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.435 [2024-07-15 15:36:00.919925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.435 qpair failed and we were unable to recover it. 
00:30:51.435 [2024-07-15 15:36:00.920220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.435 [2024-07-15 15:36:00.920229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:51.435 qpair failed and we were unable to recover it.
00:30:51.435 [... the same three messages repeat for every reconnect attempt from 15:36:00.920561 through 15:36:00.980549 (console time 00:30:51.435 to 00:30:51.440): posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:51.440 [2024-07-15 15:36:00.980739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.980749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.980950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.980960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.981275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.981284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.981576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.981585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.981878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.981898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.982240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.982249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.982535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.982545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.982649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.982658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.982937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.982950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.983135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.983145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 
00:30:51.440 [2024-07-15 15:36:00.983391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.983401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.983722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.983731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.984154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.984164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.984341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.984351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.984653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.984662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.984977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.984987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.985302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.985311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.985639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.985648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.985958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.985967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.986299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.986309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 
00:30:51.440 [2024-07-15 15:36:00.986484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.986495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.986888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.986898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.987053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.987063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.987475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.987484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.987683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.987692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.987934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.987943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.988272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.988281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.988579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.988588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.988913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.988923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.440 [2024-07-15 15:36:00.989251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.989260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 
00:30:51.440 [2024-07-15 15:36:00.989578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.440 [2024-07-15 15:36:00.989588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.440 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.989786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.989795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.990120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.990130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.990313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.990324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.990656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.990666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.990982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.990992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.991322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.991332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.991655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.991664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.991960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.991970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.992236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.992245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 
00:30:51.441 [2024-07-15 15:36:00.992570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.992579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.992753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.992763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.992962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.992971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.993279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.993288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.993612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.993621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.993948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.993957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.994120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.994128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.994453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.994462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.994776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.994787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.995123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.995133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 
00:30:51.441 [2024-07-15 15:36:00.995329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.995339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.995687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.995696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.995859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.995868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.996227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.996237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.996551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.996560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.996959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.996969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.997158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.997167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.997450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.997459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.997780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.997790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.997969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.997979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 
00:30:51.441 [2024-07-15 15:36:00.998195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.998205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.998387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.998398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.998746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.998755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.998803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.998811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.999193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.999202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.999497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.999507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:00.999845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:00.999855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:01.000045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:01.000054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:01.000337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:01.000347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:01.000693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:01.000702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 
00:30:51.441 [2024-07-15 15:36:01.001029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:01.001039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.441 [2024-07-15 15:36:01.001186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.441 [2024-07-15 15:36:01.001195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.441 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.001499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.001508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.001836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.001845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.002146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.002155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.002470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.002479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.002849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.002859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.003024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.003034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.003182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.003192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.003561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.003570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 
00:30:51.442 [2024-07-15 15:36:01.003768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.003778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.003958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.003968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.004277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.004287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.004628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.004637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.004983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.004993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.005318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.005327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.005620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.005630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.005932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.005941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.006155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.006168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.006548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.006558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 
00:30:51.442 [2024-07-15 15:36:01.006898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.006908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.007141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.007150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.007443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.007452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.007795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.007805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.008210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.008219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.008408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.008418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.008740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.008749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.009094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.009104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.009275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.009285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.009492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.009501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 
00:30:51.442 [2024-07-15 15:36:01.009881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.009894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.010059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.010069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.010383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.010393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.010599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.010609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.010902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.010912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.442 [2024-07-15 15:36:01.011221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.442 [2024-07-15 15:36:01.011230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.442 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.011412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.011421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.011818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.011827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.012139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.012148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.012305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.012314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 
00:30:51.443 [2024-07-15 15:36:01.012639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.012649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.012949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.012959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.013350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.013359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.013571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.013581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.013918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.013927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.014273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.014282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.014583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.014592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.014767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.014777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.014950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.014960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.015260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.015269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 
00:30:51.443 [2024-07-15 15:36:01.015578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.015587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.015909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.015919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.016144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.016153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.016518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.016527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.443 [2024-07-15 15:36:01.016702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.443 [2024-07-15 15:36:01.016712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.443 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.016895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.016905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.017192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.017202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.017512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.017521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.017692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.017703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.018058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.018068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 
00:30:51.720 [2024-07-15 15:36:01.018400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.018409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.018708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.018717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.018855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.018864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.019186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.019195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.019395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.019405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.019594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.019604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.019965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.019974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.020202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.020212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.020417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.020427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.020747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.020756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 
00:30:51.720 [2024-07-15 15:36:01.021071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.021081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.021406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.021415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.021641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.021650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.021948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.021958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.022159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.022169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.022468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.022477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.022876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.022890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.023062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.023072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.023467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.023476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.023664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.023674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 
00:30:51.720 [2024-07-15 15:36:01.023986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.023995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.024311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.024320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.024609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.024619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.024830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.720 [2024-07-15 15:36:01.024839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.720 qpair failed and we were unable to recover it. 00:30:51.720 [2024-07-15 15:36:01.025119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.025129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.025179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.025188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.025500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.025510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.025863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.025873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.026199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.026208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.026531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.026541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 
00:30:51.721 [2024-07-15 15:36:01.026886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.026897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.027088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.027097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.027421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.027430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.027602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.027613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.027930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.027939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.028240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.028249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.028598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.028607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.028658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.028666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.028951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.028966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.029162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.029171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 
00:30:51.721 [2024-07-15 15:36:01.029502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.029511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.029700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.029710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.029929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.029939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.030103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.030113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.030322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.030331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.030677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.030686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.031044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.031054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.031372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.031382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.031587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.031596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.031788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.031797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 
00:30:51.721 [2024-07-15 15:36:01.032005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.032014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.032160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.032169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.032381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.032390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.032600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.032609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.032917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.032927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.033252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.033261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.033660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.033669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.033966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.033975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.034267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.034276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.034571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.034580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 
00:30:51.721 [2024-07-15 15:36:01.034827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.034837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.035158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.035167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.035393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.721 [2024-07-15 15:36:01.035402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.721 qpair failed and we were unable to recover it. 00:30:51.721 [2024-07-15 15:36:01.035739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.035748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.036036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.036045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.036244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.036253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.036633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.036643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.036960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.036969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.037139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.037148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.037461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.037470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 
00:30:51.722 [2024-07-15 15:36:01.037785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.037794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.037973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.037983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.038350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.038360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.038701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.038711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.038955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.038964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.039128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.039137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.039334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.039343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.039533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.039542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.039775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.039786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.040105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.040115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 
00:30:51.722 [2024-07-15 15:36:01.040414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.040424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.040780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.040790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.041160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.041170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.041494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.041504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.041654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.041663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.041864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.041873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.042256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.042265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.042618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.042627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.042924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.042934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.043248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.043258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 
00:30:51.722 [2024-07-15 15:36:01.043475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.043485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.043769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.043778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.044114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.044123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.044470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.044479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.044653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.044663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.044828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.044838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.045203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.045213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.045401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.045411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.045729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.045738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.046040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.046050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 
00:30:51.722 [2024-07-15 15:36:01.046344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.046353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.722 [2024-07-15 15:36:01.046742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.722 [2024-07-15 15:36:01.046751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.722 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.046948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.046958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.047271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.047281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.047611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.047621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.047757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.047767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.048037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.048047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.048274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.048284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.048591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.048601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.048778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.048787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 
00:30:51.723 [2024-07-15 15:36:01.048836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.048845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.049162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.049172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.049490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.049499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.049801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.049810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.049981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.049991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.050348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.050357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.050679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.050688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.050878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.050895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.051211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.051223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.051405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.051415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 
00:30:51.723 [2024-07-15 15:36:01.051706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.051715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.052002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.052012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.052219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.052229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.052418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.052427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.052737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.052746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.053129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.053139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.053517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.053527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.053865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.053875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.054220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.054230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.054420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.054431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 
00:30:51.723 [2024-07-15 15:36:01.054798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.054808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.054990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.054999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.055197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.055211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.055426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.055435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.055779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.055788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.055850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.055859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.056154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.056164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.056363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.056374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.723 [2024-07-15 15:36:01.056699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.723 [2024-07-15 15:36:01.056709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.723 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.056901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.056912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 
00:30:51.724 [2024-07-15 15:36:01.057267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.057276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.057490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.057500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.057669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.057678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.057890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.057901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.058100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.058110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.058438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.058448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.058726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.058736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.059064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.059074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.059244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.059254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.059424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.059435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 
00:30:51.724 [2024-07-15 15:36:01.059752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.059761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.059809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.059817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.060101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.060111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.060439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.060449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.060670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.060679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.060986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.060996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.061316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.061325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.061521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.061531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.061866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.061878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.062176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.062186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 
00:30:51.724 [2024-07-15 15:36:01.062374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.062383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.062689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.062698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.062911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.062921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.063107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.063117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.063424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.063433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.063749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.063758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.064063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.064073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.064411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.064421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.064647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.064657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 00:30:51.724 [2024-07-15 15:36:01.064822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.724 [2024-07-15 15:36:01.064832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.724 qpair failed and we were unable to recover it. 
00:30:51.725 [2024-07-15 15:36:01.065202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.065213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.065555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.065565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.065952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.065962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.066264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.066273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.066637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.066646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.066952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.066962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.067289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.067299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.067626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.067635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.067916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.067925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.068234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.068244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 
00:30:51.725 [2024-07-15 15:36:01.068544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.068554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.068866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.068876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.069194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.069204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.069520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.069529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.069882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.069896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.070191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.070201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.070520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.070530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.070876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.070896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.071212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.071221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.071518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.071528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 
00:30:51.725 [2024-07-15 15:36:01.071821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.071831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.072035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.072046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.072358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.072368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.072420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.072429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.072813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.072823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.072998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.073010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.073296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.073306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.073696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.073706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.074059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.074071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.074391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.074401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 
00:30:51.725 [2024-07-15 15:36:01.074691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.074701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.075005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.075015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.075339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.075348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.075651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.075661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.076055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.076065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.076369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.076378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.076678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.076688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.077086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.077096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.077200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.725 [2024-07-15 15:36:01.077210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.725 qpair failed and we were unable to recover it. 00:30:51.725 [2024-07-15 15:36:01.077521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.077530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 
00:30:51.726 [2024-07-15 15:36:01.077841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.077851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.078151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.078161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.078455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.078465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.078651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.078661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.078984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.078994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.079321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.079330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.079674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.079684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.079870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.079881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.080188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.080198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.080336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.080346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 
00:30:51.726 [2024-07-15 15:36:01.080518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.080527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.080831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.080841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.081144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.081154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.081484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.081493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.081786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.081795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.082120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.082130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.082270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.082280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.082500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.082509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.082801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.082811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.083003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.083013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 
00:30:51.726 [2024-07-15 15:36:01.083404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.083414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.083729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.083739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.083892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.083901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.084097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2270a60 is same with the state(5) to be set 00:30:51.726 [2024-07-15 15:36:01.084590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.084624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.084929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.084942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.085410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.085446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.085753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.085765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.086178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.086190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.086488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.086499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 
00:30:51.726 [2024-07-15 15:36:01.086879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.086896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.087313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.087324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.087636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.087646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.087967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.087979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.088298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.088307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.088530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.088539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.726 qpair failed and we were unable to recover it. 00:30:51.726 [2024-07-15 15:36:01.088887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.726 [2024-07-15 15:36:01.088901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.089089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.089106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.089325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.089335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.089643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.089652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 
00:30:51.727 [2024-07-15 15:36:01.089951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.089961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.090210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.090220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.090546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.090555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.090860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.090869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.091199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.091209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.091263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.091271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.091589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.091598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.091901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.091911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.092159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.092168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.092495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 
00:30:51.727 [2024-07-15 15:36:01.092838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.092848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.093217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.093228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.093553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.093563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.093923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.093933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.094250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.094259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.094440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.094450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.094789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.094801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.095030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.095040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.095227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.095236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.095563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.095572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 
00:30:51.727 [2024-07-15 15:36:01.095897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.095906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.096209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.096218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.096590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.096599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.096920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.096931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.097114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.097124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.097420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.097429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.097728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.097737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.097964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.097975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.098292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.098302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.098469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.098478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 
00:30:51.727 [2024-07-15 15:36:01.098862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.098872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.099073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.099083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.099272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.099289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.099505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.099514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.099595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.099605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.099809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.727 [2024-07-15 15:36:01.099819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.727 qpair failed and we were unable to recover it. 00:30:51.727 [2024-07-15 15:36:01.100138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.100147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.100341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.100350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.100680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.100689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.100746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.100756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 
00:30:51.728 [2024-07-15 15:36:01.101065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.101075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.101403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.101412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.101638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.101647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.101975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.101990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.102310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.102319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.102621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.102630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.102959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.102968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.103291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.103301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.103640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.103650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.103735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.103744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 
00:30:51.728 [2024-07-15 15:36:01.103847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.103857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.104069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.104079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.104266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.104275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.104590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.104600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.104919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.104930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.105263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.105272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.105579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.105588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.105902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.105911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.106086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.106095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.106315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.106326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 
00:30:51.728 [2024-07-15 15:36:01.106376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.106385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.106560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.106570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.106909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.106919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.107112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.107122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.107307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.107316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.107509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.107519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.107867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.107876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.108041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.108050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.108361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.108370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.108418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.108426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 
00:30:51.728 [2024-07-15 15:36:01.108716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.108727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.109116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.109126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.109480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.728 [2024-07-15 15:36:01.109489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.728 qpair failed and we were unable to recover it. 00:30:51.728 [2024-07-15 15:36:01.109673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.109683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.109887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.109897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.110061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.110070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.110438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.110448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.110633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.110643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.110839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.110849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.111047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.111057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 
00:30:51.729 [2024-07-15 15:36:01.111386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.111396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.111742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.111751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.112014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.112025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.112354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.112364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.112584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.112594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.112852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.112861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.113205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.113215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.113556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.113565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.113737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.113747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.113954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.113964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 
00:30:51.729 [2024-07-15 15:36:01.114208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.114217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.114514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.114523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.114573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.114583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.114872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.114881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.115050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.115061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.115374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.115383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.115607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.115624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.115954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.115964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.116312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.116321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.116495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.116506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 
00:30:51.729 [2024-07-15 15:36:01.116827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.116837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.117010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.117020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.117409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.117418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.117807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.117817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.118191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.118201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.118601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.118610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.118924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.729 [2024-07-15 15:36:01.118934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.729 qpair failed and we were unable to recover it. 00:30:51.729 [2024-07-15 15:36:01.119264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.119273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.119442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.119451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.119822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.119831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 
00:30:51.730 [2024-07-15 15:36:01.120148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.120157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.120466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.120476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.120648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.120658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.121054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.121065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.121288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.121297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.121636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.121645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.121818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.121828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.122154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.122163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.122471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.122481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.122663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.122673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 
00:30:51.730 [2024-07-15 15:36:01.122984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.122994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.123330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.123339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.123638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.123647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.123846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.123856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.124209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.124387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.124397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.124797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.124806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.125125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.125135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.125468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.125477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.125651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.125660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 
00:30:51.730 [2024-07-15 15:36:01.125923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.125933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.126248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.126257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.126571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.126580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.126810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.126820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.127082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.127091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.127271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.127280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.127509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.127519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.127813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.127823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.128155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.128167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.128530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.128539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 
00:30:51.730 [2024-07-15 15:36:01.128845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.128855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.129046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.129056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.129257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.129266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.129627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.129636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.129934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.129944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.130265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.130274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.130474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.130483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.730 [2024-07-15 15:36:01.130793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.730 [2024-07-15 15:36:01.130803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.730 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.131008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.131018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.131339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.131349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 
00:30:51.731 [2024-07-15 15:36:01.131740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.131749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.131922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.131933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.132110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.132121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.132443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.132453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.132670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.132680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.133026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.133036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.133369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.133378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.133555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.133565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.133752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.133761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.133965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.133976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 
00:30:51.731 [2024-07-15 15:36:01.134275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.134284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.134594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.134602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.134907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.134917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.135096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.135106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.135579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.135591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.135894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.135913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.136235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.136244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.136659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.136668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.136710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.136718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.136895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.136905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 
00:30:51.731 [2024-07-15 15:36:01.137103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.137112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.137356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.137366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.137675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.137685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.137850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.137860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.138168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.138177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.138522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.138532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.138817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.138827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.139051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.139060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.139227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.139237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.139501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.139511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 
00:30:51.731 [2024-07-15 15:36:01.139690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.139701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.140035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.140045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.140207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.140217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.140455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.140464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.140615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.140624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.140906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.140915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.141226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.141235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.141529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.731 [2024-07-15 15:36:01.141538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.731 qpair failed and we were unable to recover it. 00:30:51.731 [2024-07-15 15:36:01.141724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.141733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.142127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.142137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 
00:30:51.732 [2024-07-15 15:36:01.142325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.142334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.142656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.142665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.142963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.142973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.143305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.143314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.143620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.143629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.143787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.143797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.144085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.144096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.144272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.144281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.144605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.144614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.144955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.144965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 
00:30:51.732 [2024-07-15 15:36:01.145276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.145285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.145674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.145683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.145890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.145900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.146245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.146254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.146447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.146457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.146787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.146796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.146986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.146997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.147286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.147295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.147610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.147620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.147914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.147924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 
00:30:51.732 [2024-07-15 15:36:01.148240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.148250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.148576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.148586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.148879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.149221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.149231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.149534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.149544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.149870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.149879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.150173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.150183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.150481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.150491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.150672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.150681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.150851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.150860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 
00:30:51.732 [2024-07-15 15:36:01.151103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.151112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.151414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.151423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.151726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.151736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.152061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.152073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.152420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.152430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.152739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.152748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.153061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.153072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.153401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.153411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.732 qpair failed and we were unable to recover it. 00:30:51.732 [2024-07-15 15:36:01.153614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.732 [2024-07-15 15:36:01.153624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.154007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.154017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 
00:30:51.733 [2024-07-15 15:36:01.154334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.154344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.154673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.154683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.155011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.155021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.155353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.155365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.155552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.155562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.155863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.155873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.156180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.156191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.156388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.156397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.156627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.156636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.156919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.156929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 
00:30:51.733 [2024-07-15 15:36:01.157279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.157288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.157719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.157728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.158049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.158059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.158403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.158412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.158727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.158737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.158924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.158935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.159349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.159358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.159612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.159621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.159924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.159935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.160248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.160257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 
00:30:51.733 [2024-07-15 15:36:01.160668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.160678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.161025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.161034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.161346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.161355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.161620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.161629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.162012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.162023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.162185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.162195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.162380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.162391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.162710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.162720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.163050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.163061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.163390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.163399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 
00:30:51.733 [2024-07-15 15:36:01.163604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.163616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.163988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.163999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.164325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.164335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.733 [2024-07-15 15:36:01.164664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.733 [2024-07-15 15:36:01.164674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.733 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.164977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.164987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.165157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.165167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.165526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.165535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.165840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.165849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.166037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.166047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.166397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.166406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 
00:30:51.734 [2024-07-15 15:36:01.166796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.166805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.167004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.167013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.167305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.167315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.167621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.167631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.167928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.167938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.168108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.168117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.168397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.168406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.168594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.168609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.168952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.168962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.169309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.169318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 
00:30:51.734 [2024-07-15 15:36:01.169492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.169502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.169834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.169844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.170038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.170048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.170360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.170369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.170650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.170660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.170871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.170881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.171221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.171230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.171442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.171454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.171632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.171643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.171948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.171959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 
00:30:51.734 [2024-07-15 15:36:01.172149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.172160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.172328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.172337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.172519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.172530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.172829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.172838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.173142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.173152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.173455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.173464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.173824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.173833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.174146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.174156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.174491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.174500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.174685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.174696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 
00:30:51.734 [2024-07-15 15:36:01.174982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.174992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.175329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.175339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.175539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.175548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.175965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.175975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.734 [2024-07-15 15:36:01.176274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.734 [2024-07-15 15:36:01.176283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.734 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.176581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.176590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.176881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.176894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.177275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.177284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.177586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.177596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.177949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.177959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 
00:30:51.735 [2024-07-15 15:36:01.178279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.178288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.178632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.178642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.178984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.178994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.179193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.179203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.179498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.179508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.179835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.179844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.180248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.180258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.180454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.180463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.180790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.180799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.181098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.181108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 
00:30:51.735 [2024-07-15 15:36:01.181400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.181411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.181742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.181752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.182092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.182102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.182417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.182427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.182786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.182796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.183110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.183119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.183468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.183478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.183816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.183825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.184120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.184131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.184432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.184442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 
00:30:51.735 [2024-07-15 15:36:01.184798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.184807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.185136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.185145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.185490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.185500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.185803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.185813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.186104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.186114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.186410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.186419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.186804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.186813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.187027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.187037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.187384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.187393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.187589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.187599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 
00:30:51.735 [2024-07-15 15:36:01.187948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.187958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.188273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.188282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.188543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.188552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.188717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.188726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.189060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.189070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.189417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.735 [2024-07-15 15:36:01.189427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.735 qpair failed and we were unable to recover it. 00:30:51.735 [2024-07-15 15:36:01.189622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.189633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.189801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.189811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.190131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.190141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.190462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.190472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 
00:30:51.736 [2024-07-15 15:36:01.190664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.190675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.190869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.190878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.191100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.191115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.191292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.191302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.191520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.191529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.191870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.191882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.192201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.192211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.192557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.192567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.192738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.192748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.193028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.193038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 
00:30:51.736 [2024-07-15 15:36:01.193232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.193242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.193584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.193593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.193739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.193749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.193835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.193844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.194028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.194037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.194340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.194349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.194562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.194572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.194904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.194915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.195025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.195035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.195234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.195243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 
00:30:51.736 [2024-07-15 15:36:01.195607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.195616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.195971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.195981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.196111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.196121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.196429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.196438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.196766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.196775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.196953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.196963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.197340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.197349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.197646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.197655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.197961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.197970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.198308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.198318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 
00:30:51.736 [2024-07-15 15:36:01.198615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.198624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.198937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.198947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.199255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.199267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.199614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.199624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.199921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.199931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.200115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.200125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.736 [2024-07-15 15:36:01.200500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.736 [2024-07-15 15:36:01.200510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.736 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.200744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.200754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.201064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.201074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.201268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.201278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 
00:30:51.737 [2024-07-15 15:36:01.201442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.201451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.201738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.201747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.201917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.201927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.202240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.202250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.202555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.202564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.202870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.202879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.203201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.203211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.203400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.203410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.203737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.203746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.203798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.203806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 
00:30:51.737 [2024-07-15 15:36:01.204113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.204124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.204529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.204538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.204842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.204852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.205044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.205054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.205237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.205247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.205494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.205504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.205840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.205850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.206192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.206202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.206399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.206408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.206759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.206768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 
00:30:51.737 [2024-07-15 15:36:01.207094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.207103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.207403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.207412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.207604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.207615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.207940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.207950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.208240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.208249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.208581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.208590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.208893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.208903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.209289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.209299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.209572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.209581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.209764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.209772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 
00:30:51.737 [2024-07-15 15:36:01.210098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.210107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.210285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.210295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.210571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.210580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.210887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.210896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.211192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.211201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.211565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.211574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.737 qpair failed and we were unable to recover it. 00:30:51.737 [2024-07-15 15:36:01.211964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.737 [2024-07-15 15:36:01.211974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.212293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.212303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.212598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.212607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.212801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.212811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 
00:30:51.738 [2024-07-15 15:36:01.213173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.213182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.213502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.213512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.213705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.213715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.213910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.213921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.214109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.214118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.214428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.214438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.214633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.214642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.214961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.214970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.215273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.215282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.215659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.215668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 
00:30:51.738 [2024-07-15 15:36:01.215942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.215952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.216008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.216016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.216342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.216351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.216580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.216589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.216920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.216930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.217096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.217105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.217425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.217434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.217632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.217641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.217945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.217955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.218141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.218149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 
00:30:51.738 [2024-07-15 15:36:01.218474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.218487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.218796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.218805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.219128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.219137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.219189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.219199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.219371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.219380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.219692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.219702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.219888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.219903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.220087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.220097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.220322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.220331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.220645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.220654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 
00:30:51.738 [2024-07-15 15:36:01.220855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.220866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227acf0 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.221230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.221266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.738 qpair failed and we were unable to recover it. 00:30:51.738 [2024-07-15 15:36:01.221628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.738 [2024-07-15 15:36:01.221640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.221727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.221739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.221878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.221894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.221969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.221977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.222312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.222321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.222494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.222505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.222846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.222856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.223171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.223181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 
00:30:51.739 [2024-07-15 15:36:01.223511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.223520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.223823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.223833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.224008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.224018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.224232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.224243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.224467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.224476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.224660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.224669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.224987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.224997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.225365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.225375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.225739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.225749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.225912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.225922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 
00:30:51.739 [2024-07-15 15:36:01.226224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.226234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.226529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.226538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.226867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.226877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.227185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.227195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.227379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.227388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.227715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.227725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.228062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.228072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.228391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.228400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.228566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.228576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.228819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.228828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 
00:30:51.739 [2024-07-15 15:36:01.229196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.229208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.229376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.229386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.229738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.229748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.230060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.230070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.230374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.230384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.230701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.230710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.230872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.230881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.231276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.231286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.231474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.231680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.231689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 
00:30:51.739 [2024-07-15 15:36:01.231872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.231882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.232056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.232066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.232265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.739 [2024-07-15 15:36:01.232275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.739 qpair failed and we were unable to recover it. 00:30:51.739 [2024-07-15 15:36:01.232621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.232631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.232923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.232934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.233219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.233229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.233585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.233594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.233919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.233928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.233978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.233986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.234343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.234352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 
00:30:51.740 [2024-07-15 15:36:01.234738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.234747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.235046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.235056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.235395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.235405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.235744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.235754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.235967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.235977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.236022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.236031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.236231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.236240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.236574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.236584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.236770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.236779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.237100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.237110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 
00:30:51.740 [2024-07-15 15:36:01.237407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.237416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.237590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.237600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.237799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.237809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.237990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.238000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.238168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.238177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.238350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.238359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.238687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.238697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.239034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.239043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.239388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.239398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.239599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.239608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 
00:30:51.740 [2024-07-15 15:36:01.239958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.239969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.240140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.240149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.240491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.240500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.240798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.240808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.241129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.241139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.241440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.241449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.241668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.241677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.242003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.242013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.242347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.242356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.242654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.242663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 
00:30:51.740 [2024-07-15 15:36:01.242857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.242866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.243043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.243052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.243269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.243284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.740 [2024-07-15 15:36:01.243660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.740 [2024-07-15 15:36:01.243669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.740 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.243718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.243728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.244034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.244044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.244385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.244395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.244514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.244522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.244830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.244839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.245233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.245244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 
00:30:51.741 [2024-07-15 15:36:01.245587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.245597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.245796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.245806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.246125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.246135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.246435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.246445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.246760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.246770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.247123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.247133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.247434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.247444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.247631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.247642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.247820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.247830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.248148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.248158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 
00:30:51.741 [2024-07-15 15:36:01.248322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.248332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.248715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.248725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.248772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.248781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.249079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.249089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.249271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.249280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.249608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.249619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.249817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.249827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.250160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.250170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.250380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.250389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.250736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.250745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 
00:30:51.741 [2024-07-15 15:36:01.251052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.251064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.251393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.251402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.251700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.251709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.251761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.251771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.252043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.252053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.252371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.252381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.252725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.252735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.252918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.252929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.253226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.253235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.253575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.253584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 
00:30:51.741 [2024-07-15 15:36:01.253878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.253899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.254223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.254232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.254577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.254586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.254879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.741 [2024-07-15 15:36:01.254892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.741 qpair failed and we were unable to recover it. 00:30:51.741 [2024-07-15 15:36:01.255087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.255097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.255279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.255288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.255471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.255481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.255794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.255803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.256191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.256201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.256482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.256492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 
00:30:51.742 [2024-07-15 15:36:01.256654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.256664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.257977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.257988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.258277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.258288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.258587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.258596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 00:30:51.742 [2024-07-15 15:36:01.258920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.742 [2024-07-15 15:36:01.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.742 qpair failed and we were unable to recover it. 
00:30:51.742 [2024-07-15 15:36:01.259339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.742 [2024-07-15 15:36:01.259349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:51.742 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection attempt from 15:36:01.259 through 15:36:01.318 ...]
00:30:51.748 [2024-07-15 15:36:01.318118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:51.748 [2024-07-15 15:36:01.318127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420
00:30:51.748 qpair failed and we were unable to recover it.
00:30:51.748 [2024-07-15 15:36:01.318316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.318326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.318648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.318657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.318956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.318965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.319298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.319307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.319608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.319617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.319800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.319810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.320043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.320053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.320364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.320374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.320782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.320791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.320983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.320994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 
00:30:51.748 [2024-07-15 15:36:01.321327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.321336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.321522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.321539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.321957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.321966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.322271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.322280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.322605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.322614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.322743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.322752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.322928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.322938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.323233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.323243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.323430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.323439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.323721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.323732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 
00:30:51.748 [2024-07-15 15:36:01.323779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.323789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:51.748 [2024-07-15 15:36:01.324076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.748 [2024-07-15 15:36:01.324085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:51.748 qpair failed and we were unable to recover it. 00:30:52.023 [2024-07-15 15:36:01.324401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.023 [2024-07-15 15:36:01.324412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.023 qpair failed and we were unable to recover it. 00:30:52.023 [2024-07-15 15:36:01.324718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.023 [2024-07-15 15:36:01.324729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.023 qpair failed and we were unable to recover it. 00:30:52.023 [2024-07-15 15:36:01.325057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.023 [2024-07-15 15:36:01.325067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.023 qpair failed and we were unable to recover it. 00:30:52.023 [2024-07-15 15:36:01.325388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.023 [2024-07-15 15:36:01.325397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.023 qpair failed and we were unable to recover it. 00:30:52.023 [2024-07-15 15:36:01.325724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.023 [2024-07-15 15:36:01.325733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.023 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.326068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.326080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.326272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.326282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.326495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.326505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 
00:30:52.024 [2024-07-15 15:36:01.326816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.326826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.327130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.327140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.327429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.327438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.327718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.327727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.328068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.328079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.328275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.328285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.328685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.328694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.329014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.329023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.329221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.329232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.329592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.329602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 
00:30:52.024 [2024-07-15 15:36:01.329817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.329827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.330152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.330161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.330482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.330491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.330693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.330704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.331097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.331106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.331407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.331416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.331708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.331717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.332039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.332049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.332378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.332387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.332557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.332567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 
00:30:52.024 [2024-07-15 15:36:01.332738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.332748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.333071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.333081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.333424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.333433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.333726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.333735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.334029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.334038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.334208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.334218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.334485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.334494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.334686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.334696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.335044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.335054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.335351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.335361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 
00:30:52.024 [2024-07-15 15:36:01.335655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.335664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.335970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.335979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.336144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.336154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.336381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.336390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.336734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.024 [2024-07-15 15:36:01.336743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.024 qpair failed and we were unable to recover it. 00:30:52.024 [2024-07-15 15:36:01.337051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.337061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.337248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.337257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.337592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.337604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.337788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.337798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.337991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.338001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 
00:30:52.025 [2024-07-15 15:36:01.338310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.338320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.338508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.338517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.338686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.338695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.339055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.339065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.339265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.339275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.339611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.339621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.339920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.339930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.340122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.340132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.340377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.340386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.340748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.340757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 
00:30:52.025 [2024-07-15 15:36:01.341102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.341112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.341504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.341513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.341817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.341827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.341911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.341921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.342101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.342110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.342422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.342431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.342631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.342640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.342941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.342951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.343274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.343284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.343655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.343664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 
00:30:52.025 [2024-07-15 15:36:01.343985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.343994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.344312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.344323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.344536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.344545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.344726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.344736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.344944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.344954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.345232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.345242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.345427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.345437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.345832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.345842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.346014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.346025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.346346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.346356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 
00:30:52.025 [2024-07-15 15:36:01.346660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.346670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.346970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.346980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.347194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.347204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.347385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.347395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.347581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.025 [2024-07-15 15:36:01.347590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.025 qpair failed and we were unable to recover it. 00:30:52.025 [2024-07-15 15:36:01.347869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.347879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.348102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.348113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.348341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.348353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.348623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.348632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.348950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.348960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 
00:30:52.026 [2024-07-15 15:36:01.349262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.349272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.349356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.349365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.349534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.349544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.349803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.349813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.350042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.350052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.350379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.350388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.350773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.350783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.351081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.351091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.351284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.351295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.351493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.351505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 
00:30:52.026 [2024-07-15 15:36:01.351828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.351839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.352138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.352149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.352454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.352464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.352850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.352860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.353167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.353176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.353363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.353373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.353722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.353731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.353779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.353788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.354102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.354112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.354304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.354314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 
00:30:52.026 [2024-07-15 15:36:01.354555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.354564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.354755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.354766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.355057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.355066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.355375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.355384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.355707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.356060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.356070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.356382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.356392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.356707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.356716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.357146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.357155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.357465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.357474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 
00:30:52.026 [2024-07-15 15:36:01.357788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.357798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.358128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.358138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.358481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.358491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.358666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.026 [2024-07-15 15:36:01.358675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.026 qpair failed and we were unable to recover it. 00:30:52.026 [2024-07-15 15:36:01.358991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.359001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.359333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.359342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.359661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.359670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.359838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.359850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.360203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.360215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.360506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.360516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 
00:30:52.027 [2024-07-15 15:36:01.360808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.360818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.361161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.361392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.361402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.361591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.361600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.361769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.361779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.361964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.361974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.362177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.362187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.362379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.362389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.362742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.362751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.363138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.363148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 
00:30:52.027 [2024-07-15 15:36:01.363457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.363467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.363793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.363803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.364062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.364072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.364425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.364434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.364750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.364759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.365107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.365118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.365282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.365292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.365581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.365591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.365950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.365960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.366265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.366274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 
00:30:52.027 [2024-07-15 15:36:01.366513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.366522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.366690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.366700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.367009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.367020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.367351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.367361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.367663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.367672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.367850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.367860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.368057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.368067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.368359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.368368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.368420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.368429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.368584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.368593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 
00:30:52.027 [2024-07-15 15:36:01.368942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.368952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.369271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.369281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.369580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.027 [2024-07-15 15:36:01.369589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.027 qpair failed and we were unable to recover it. 00:30:52.027 [2024-07-15 15:36:01.369916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.369925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.370212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.370222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.370387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.370397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.370698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.370708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.371017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.371030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.371215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.371226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.371531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.371541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 
00:30:52.028 [2024-07-15 15:36:01.371877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.371892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.372243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.372253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.372566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.372576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.372925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.372936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.373290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.373300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.373493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.373504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.373851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.373860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.374212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.374223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.374547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.374557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.374770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.374780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 
00:30:52.028 [2024-07-15 15:36:01.375080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.375090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.375386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.375396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.375575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.375584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.375824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.375833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.376238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.376248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.376554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.376564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.376861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.376871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.377183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.377193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.377364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.377374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.377671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.377682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 
00:30:52.028 [2024-07-15 15:36:01.378021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.378031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.378188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.378198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.378561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.378571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.378736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.378746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.379031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.379041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.028 [2024-07-15 15:36:01.379220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.028 [2024-07-15 15:36:01.379230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.028 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.379407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.379416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.379861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.379871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.380170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.380180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.380224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.380233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 
00:30:52.029 [2024-07-15 15:36:01.380395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.380404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.380781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.380790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.380962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.380973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.381300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.381309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.381646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.381655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.381957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.381966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.382140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.382149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.382381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.382393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.382687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.382696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.383021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.383031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 
00:30:52.029 [2024-07-15 15:36:01.383351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.383360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.383533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.383543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.383756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.383766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.383949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.383960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.384249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.384258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.384576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.384585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.384858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.384867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.385046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.385347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.385357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.385527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.385537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 
00:30:52.029 [2024-07-15 15:36:01.385847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.385856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.386179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.386188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.386492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.386502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.386898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.386908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.387225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.387234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.387538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.387547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.387854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.387864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.388249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.388259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.388535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.388545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.388866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.388876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 
00:30:52.029 [2024-07-15 15:36:01.389222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.389232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.389439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.389449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.389730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.389739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.390061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.390071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.390366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.390375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.390715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.029 [2024-07-15 15:36:01.390724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.029 qpair failed and we were unable to recover it. 00:30:52.029 [2024-07-15 15:36:01.390951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.390961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.391313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.391322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.391629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.391640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.391831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.391840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 
00:30:52.030 [2024-07-15 15:36:01.392169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.392179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.392500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.392509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.392838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.392848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.393160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.393170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.393223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.393232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.393584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.393593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.393908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.393918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.394225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.394237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.394458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.394467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.394650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.394659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 
00:30:52.030 [2024-07-15 15:36:01.395007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.395018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.395364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.395374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.395767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.395777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.396076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.396408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.396418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.396667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.396676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.397013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.397023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.397366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.397375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.397700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.397709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.398030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.398040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 
00:30:52.030 [2024-07-15 15:36:01.398223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.398233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.398553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.398563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.398857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.398866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.399162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.399172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.399490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.399500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.399820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.399829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.400125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.400134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.400338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.400347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.400526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.400536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.400849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.400859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 
00:30:52.030 [2024-07-15 15:36:01.401169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.401179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.401482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.401493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.401805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.401815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.402152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.402162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.402467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.402476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.402863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.402873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.030 [2024-07-15 15:36:01.403189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.030 [2024-07-15 15:36:01.403199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.030 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.403497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.403507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.403811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.403821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.404022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.404032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 
00:30:52.031 [2024-07-15 15:36:01.404346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.404355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.404535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.404545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.404890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.404901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.405099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.405109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.405456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.405465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.405781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.405790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.406113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.406124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.406489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.406501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.406699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.406710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.406790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.406800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 
00:30:52.031 [2024-07-15 15:36:01.407093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.407102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.407287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.407296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.407486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.407497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.407808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.407818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.408004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.408014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.408315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.408325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.408519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.408528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.408866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.408875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.409070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.409079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.409422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.409431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 
00:30:52.031 [2024-07-15 15:36:01.409729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.409739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.409952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.409962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.410296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.410306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.410629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.410639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.410958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.410968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.411301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.411311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.411510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.411519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.411715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.411726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.411918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.411928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.412302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.412312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 
00:30:52.031 [2024-07-15 15:36:01.412505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.412515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.412816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.412825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.413016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.413026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.413100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.413110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.413304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.413312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.413612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.413800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.413810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.031 qpair failed and we were unable to recover it. 00:30:52.031 [2024-07-15 15:36:01.414100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.031 [2024-07-15 15:36:01.414110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.414454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.414464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.414681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.414690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 
00:30:52.032 [2024-07-15 15:36:01.414989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.414999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.415171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.415180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.415514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.415523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.415721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.415731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.415928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.415938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.416282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.416293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.416472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.416482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.416844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.416853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.417239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.417248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.417545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.417555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 
00:30:52.032 [2024-07-15 15:36:01.417748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.417758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.418114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.418124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.418324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.418335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.418540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.418550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.418922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.418932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.419208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.419218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.419375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.419385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.419708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.419718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.420079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.420272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 
00:30:52.032 [2024-07-15 15:36:01.420470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.420563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.420671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.420857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.420867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.421166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.421176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.421495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.421505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.421809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.421818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.422149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.422159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.422434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.422444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.422762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.422772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 
00:30:52.032 [2024-07-15 15:36:01.423062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.423071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.423267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.423277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.032 [2024-07-15 15:36:01.423564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.032 [2024-07-15 15:36:01.423573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.032 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.423961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.423972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.424294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.424306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.424614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.424623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.424823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.424833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.425176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.425185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.425501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.425510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.425822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.425831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 
00:30:52.033 [2024-07-15 15:36:01.426122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.426131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.426423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.426432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.426738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.426747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.426918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.426927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.427210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.427219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.427518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.427528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.427711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.427721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.428047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.428057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.428403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.428413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.428762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.428771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 
00:30:52.033 [2024-07-15 15:36:01.429152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.429162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.429499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.429508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.429812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.429822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.430091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.430101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.430413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.430422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.430618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.430627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.430932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.430942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.430992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.431001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.431221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.431230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.431578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.431587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 
00:30:52.033 [2024-07-15 15:36:01.431886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.431896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.432283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.432293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.432640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.432650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.432997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.433007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.433353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.433362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.433683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.433693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.434000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.434010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.434320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.434329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.434702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.434712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.435008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.435018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 
00:30:52.033 [2024-07-15 15:36:01.435308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.435318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.435651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.435661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.435955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.033 [2024-07-15 15:36:01.435965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.033 qpair failed and we were unable to recover it. 00:30:52.033 [2024-07-15 15:36:01.436275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.436285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.436500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.436511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.436561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.436572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.436858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.436867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.437208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.437218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.437521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.437530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.437703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.437714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 
00:30:52.034 [2024-07-15 15:36:01.437996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.438006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.438311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.438321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.438675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.438684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.439050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.439060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.439355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.439364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.439667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.439677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.439968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.439978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.440355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.440365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.440556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.440566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.440802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.440812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 
00:30:52.034 [2024-07-15 15:36:01.440862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.440873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.440924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.440934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.441125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.441134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.441506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.441516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.441862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.441872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.442107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.442117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.442418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.442427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.442580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.442591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.442935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.442945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.443291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.443301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 
00:30:52.034 [2024-07-15 15:36:01.443626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.443636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.443952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.443962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.444169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.444180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.444397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.444406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.444611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.444622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.444961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.444972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.445325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.445334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.445651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.445660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.445954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.445964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.446250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.446260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 
00:30:52.034 [2024-07-15 15:36:01.446594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.446603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.446921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.446931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.447228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.447238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.034 [2024-07-15 15:36:01.447578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.034 [2024-07-15 15:36:01.447588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.034 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.447676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.447690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.447900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.447911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.448223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.448233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.448560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.448570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.448784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.448793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.449092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.449102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 
00:30:52.035 [2024-07-15 15:36:01.449297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.449306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.449660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.449670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.449978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.449988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.450035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.450045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.450263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.450273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.450447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.450457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.450786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.450795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.450985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.450996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.451371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.451380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.451574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.451585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 
00:30:52.035 [2024-07-15 15:36:01.451632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.451642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.451832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.451841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.451986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.451996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.452208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.452218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.452533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.452543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.452844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.452854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.453272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.453281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.453452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.453462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.453772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.453782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 00:30:52.035 [2024-07-15 15:36:01.454102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.035 [2024-07-15 15:36:01.454111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.035 qpair failed and we were unable to recover it. 
00:30:52.040 [2024-07-15 15:36:01.512432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.512441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.512753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.512762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.512963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.512973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.513272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.513282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.513641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.513651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.513999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.514008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.514317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.514327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.514642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.514652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.514992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.515002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.040 [2024-07-15 15:36:01.515329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.515339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 
00:30:52.040 [2024-07-15 15:36:01.515550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.040 [2024-07-15 15:36:01.515559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.040 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.515905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.515915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.516092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.516102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.516295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.516304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.516623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.516635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.516971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.516981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.517306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.517316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.517471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.517481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.041 [2024-07-15 15:36:01.517652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.517664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 
00:30:52.041 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:52.041 [2024-07-15 15:36:01.517977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.517988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.041 [2024-07-15 15:36:01.518304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.518318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.518501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.518511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.041 [2024-07-15 15:36:01.518732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.518742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.041 [2024-07-15 15:36:01.518901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.518911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.519201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.519211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.519532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.519542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.519891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.519901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 
00:30:52.041 [2024-07-15 15:36:01.520222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.520231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.520395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.520405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.520624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.520634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.521046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.521057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.521363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.521374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.521721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.521731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.521928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.521938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.522225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.522235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.522286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.522295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.522700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.522710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 
00:30:52.041 [2024-07-15 15:36:01.522888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.522899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.523285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.523295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.523662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.523672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.524017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.524027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.524415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.524425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.524634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.041 [2024-07-15 15:36:01.524644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.041 qpair failed and we were unable to recover it. 00:30:52.041 [2024-07-15 15:36:01.524950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.524960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.525137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.525149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.525444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.525454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.525749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.525758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 
00:30:52.042 [2024-07-15 15:36:01.526089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.526099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.530077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.530113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.530475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.530488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.530544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.530554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.530864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.530876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.531183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.531198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.531504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.531514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.531698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.531708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.532040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.532050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.532373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.532383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 
00:30:52.042 [2024-07-15 15:36:01.532734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.532744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.532938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.532949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.533324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.533333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.533505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.533516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.533845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.533855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.534030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.534041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.534428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.534437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.534808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.534817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.535141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.535151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.535447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.535457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 
00:30:52.042 [2024-07-15 15:36:01.535757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.535767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.535886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.535896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.536208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.536217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.536552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.536561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.536710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.536721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.536881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.536893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.537205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.537215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.537514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.537523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.537819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.537829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.538120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.538131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 
00:30:52.042 [2024-07-15 15:36:01.538417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.538428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.538608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.538617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.538924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.538936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.539235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.539245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.539556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.539566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.539750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.539759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.042 qpair failed and we were unable to recover it. 00:30:52.042 [2024-07-15 15:36:01.539988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.042 [2024-07-15 15:36:01.539999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.540366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.540376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.540671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.540682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.540986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.540996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 
00:30:52.043 [2024-07-15 15:36:01.541350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.541359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.541710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.541720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.542062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.542071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.542246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.542256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.542520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.542530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.542846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.542855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.543165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.543175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.543475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.543485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.543826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.543835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.544147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.544157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 
00:30:52.043 [2024-07-15 15:36:01.544527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.544537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.544865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.544875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.545118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.545128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.545462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.545472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.545664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.545673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.545870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.545881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.545939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.545951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.546270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.546279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.546601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.546611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.546955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.546965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 
00:30:52.043 [2024-07-15 15:36:01.547140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.547149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.547460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.547469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.547818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.547827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.548209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.548219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.548537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.548547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.548873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.548890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.549074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.549084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.549377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.549388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.549580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.549590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.549902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.549912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 
00:30:52.043 [2024-07-15 15:36:01.550262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.550271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.550572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.550582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.550773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.550785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.551131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.551141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.551493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.551503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.043 [2024-07-15 15:36:01.551864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.043 [2024-07-15 15:36:01.551874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.043 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.552167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.552177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.552494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.552503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.552911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.552920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.553240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.553250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 
00:30:52.044 [2024-07-15 15:36:01.553467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.553477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.553770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.553780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.554110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.554120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.554506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.554516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.554716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.554727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.554910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.554920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.555242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.555252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.555443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.555455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.555631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.555641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.555944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.555954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 
00:30:52.044 [2024-07-15 15:36:01.556303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.556314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.556627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.556637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.556982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.556993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.044 [2024-07-15 15:36:01.557287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.557298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:52.044 [2024-07-15 15:36:01.557498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.557510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.044 [2024-07-15 15:36:01.557824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.557835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.044 [2024-07-15 15:36:01.558148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.558158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.558496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.558506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 
00:30:52.044 [2024-07-15 15:36:01.558810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.558820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.559187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.559198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.559517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.559527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.559712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.559722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.560146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.560156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.560479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.560489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.560876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.560890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.561077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.561087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.561424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.561434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.561774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.561784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 
00:30:52.044 [2024-07-15 15:36:01.562100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.562110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.562408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.562418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.562602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.562614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.563003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.563012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.563181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.563191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.563586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.563595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.044 qpair failed and we were unable to recover it. 00:30:52.044 [2024-07-15 15:36:01.563946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.044 [2024-07-15 15:36:01.563956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.564270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.564280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.564451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.564461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.564663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.564673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 
00:30:52.045 [2024-07-15 15:36:01.565039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.565235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.565566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.565738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.565792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.565980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.565991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.566186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.566196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.566437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.566446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.566740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.566750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.566936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.566946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 
00:30:52.045 [2024-07-15 15:36:01.567128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.567138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.567328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.567338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.567655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.567664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.568006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.568017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.568340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.568350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.568653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.568664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.568984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.568994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.569399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.569409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.569592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.569602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.569894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.569904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 
00:30:52.045 [2024-07-15 15:36:01.570203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.570213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.570399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.570409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.570757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.570766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.571081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.571091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.571436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.571446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.571607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.571618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.572005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.572015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.572356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.572366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 [2024-07-15 15:36:01.572765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.572782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 Malloc0 00:30:52.045 [2024-07-15 15:36:01.573089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.573100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 
00:30:52.045 [2024-07-15 15:36:01.573301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.045 [2024-07-15 15:36:01.573311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.045 qpair failed and we were unable to recover it. 00:30:52.045 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.046 [2024-07-15 15:36:01.573729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.573740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.573935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.573945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:52.046 [2024-07-15 15:36:01.574272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.574283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.046 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.046 [2024-07-15 15:36:01.574620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.574630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.575652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.575675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.576026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.576037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.576255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.576266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 
00:30:52.046 [2024-07-15 15:36:01.576598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.576607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.576663] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.046 [2024-07-15 15:36:01.576903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.576914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.577081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.577091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.577146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.577155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.577412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.577421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.577736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.577745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.578037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.578047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.578416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.578426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.578640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.578649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 
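The rpc_cmd nvmf_create_transport -t tcp -o call issued just above (host/target_disconnect.sh@21), answered by the "*** TCP Transport Init ***" notice from tcp.c, initializes the NVMe-oF TCP transport inside the running target. A hedged standalone sketch, again assuming rpc_cmd is a thin wrapper over scripts/rpc.py; the trailing -o option is reproduced verbatim from the log rather than interpreted:

    # Initialize the TCP transport in the nvmf target (options exactly as this test passes them)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o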
00:30:52.046 [2024-07-15 15:36:01.578862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.578872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.579189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.579200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.579542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.579552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.579900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.579911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.580266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.580275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.580473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.580483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.580790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.580799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.581130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.581140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.581531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.581540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.581865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.581874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 
00:30:52.046 [2024-07-15 15:36:01.582189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.582200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.582543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.582552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.582887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.582898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.583098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.583109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.583412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.583422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.583615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.583625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.583926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.583936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.584253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.584262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.584454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.584465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.584784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.584797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 
00:30:52.046 [2024-07-15 15:36:01.585021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.585032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 [2024-07-15 15:36:01.585353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.046 [2024-07-15 15:36:01.585363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.046 qpair failed and we were unable to recover it. 00:30:52.046 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.046 [2024-07-15 15:36:01.585682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.585692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.047 [2024-07-15 15:36:01.585994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.586005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.047 [2024-07-15 15:36:01.586423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.586433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.587338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.587362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.587664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.587676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.588054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.588065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 
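The nvmf_create_subsystem call above (host/target_disconnect.sh@22) defines the subsystem the host will later attach to. A sketch of the equivalent direct RPC under the same scripts/rpc.py assumption; -s sets the serial number and -a allows any host NQN to connect, the usual meaning of those flags in SPDK's nvmf test scripts:

    # Create subsystem cnode1, allow any host, serial number SPDK00000000000001
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001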
00:30:52.047 [2024-07-15 15:36:01.588460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.588469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.588520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.588529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.588818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.588828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.589234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.589243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.589536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.589546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.589934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.589944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.590260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.590270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.590614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.590624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.590963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.590973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.591273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.591282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 
00:30:52.047 [2024-07-15 15:36:01.591494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.591503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.591706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.591715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.591958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.591968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.592172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.592182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.592493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.592502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.592841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.592854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.593186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.593196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.047 [2024-07-15 15:36:01.593492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.593503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.047 [2024-07-15 15:36:01.593826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.593836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 
00:30:52.047 [2024-07-15 15:36:01.594081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.594094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.047 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.047 [2024-07-15 15:36:01.594399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.594409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.595164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.595184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.595563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.595574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.595764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.595775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.595957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.595967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.596161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.596170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.596400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.596410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.596733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.596743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 
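The nvmf_subsystem_add_ns call above (host/target_disconnect.sh@24) attaches the Malloc0 bdev created earlier as a namespace of cnode1. Standalone sketch under the same rpc.py assumption:

    # Expose bdev Malloc0 as a namespace of nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0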
00:30:52.047 [2024-07-15 15:36:01.597065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.597075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.597371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.597381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.597576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.597589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.597675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.047 [2024-07-15 15:36:01.597685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.047 qpair failed and we were unable to recover it. 00:30:52.047 [2024-07-15 15:36:01.598024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.598035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.598239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.598249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.598446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.598455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.598800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.598810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.599226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.599235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.599537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.599546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 
00:30:52.048 [2024-07-15 15:36:01.599728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.599738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.599951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.599961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.600297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.600306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.600658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.600671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.600994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.601004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.601314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.601324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.048 [2024-07-15 15:36:01.601504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.601516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.601867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.048 [2024-07-15 15:36:01.601877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 
00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.048 [2024-07-15 15:36:01.602265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.602275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.048 [2024-07-15 15:36:01.603306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.603327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.603537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.603548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.603623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.603632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.603856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.603866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.603918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.603931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.604255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.604265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.604494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.604504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 [2024-07-15 15:36:01.604709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.048 [2024-07-15 15:36:01.604718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2728000b90 with addr=10.0.0.2, port=4420 00:30:52.048 qpair failed and we were unable to recover it. 
00:30:52.048 [2024-07-15 15:36:01.604875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.048 [2024-07-15 15:36:01.607267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.048 [2024-07-15 15:36:01.607362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.048 [2024-07-15 15:36:01.607379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.048 [2024-07-15 15:36:01.607390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.048 [2024-07-15 15:36:01.607397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.048 [2024-07-15 15:36:01.607416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.048 [2024-07-15 15:36:01.617209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.048 [2024-07-15 15:36:01.617270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.048 [2024-07-15 15:36:01.617287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.048 [2024-07-15 15:36:01.617294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.048 [2024-07-15 15:36:01.617300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.048 [2024-07-15 15:36:01.617315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.048 qpair failed and we were unable to recover it. 
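With the two nvmf_subsystem_add_listener calls above (host/target_disconnect.sh@25 for cnode1 and @26 for the discovery subsystem), the target starts accepting on 10.0.0.2:4420, as the "*** NVMe/TCP Target Listening ***" notice confirms. From this point the failure mode in the log changes: connect() now succeeds, but the Fabrics CONNECT for the I/O qpair is rejected by the target ("Unknown controller ID 0x1", completed with sct 1, sc 130), which is the disconnect/recovery path this target_disconnect test case is exercising. Standalone sketch of the listener setup, still assuming rpc_cmd wraps scripts/rpc.py:

    # Listen for the data subsystem and for discovery on the same TCP address/port
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420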
00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.048 15:36:01 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 901375 00:30:52.048 [2024-07-15 15:36:01.627207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.048 [2024-07-15 15:36:01.627272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.048 [2024-07-15 15:36:01.627288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.048 [2024-07-15 15:36:01.627294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.048 [2024-07-15 15:36:01.627301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.048 [2024-07-15 15:36:01.627315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.048 qpair failed and we were unable to recover it. 00:30:52.316 [2024-07-15 15:36:01.637220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.316 [2024-07-15 15:36:01.637318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.316 [2024-07-15 15:36:01.637333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.316 [2024-07-15 15:36:01.637340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.316 [2024-07-15 15:36:01.637346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.316 [2024-07-15 15:36:01.637360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.316 qpair failed and we were unable to recover it. 00:30:52.316 [2024-07-15 15:36:01.647228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.316 [2024-07-15 15:36:01.647298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.316 [2024-07-15 15:36:01.647313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.316 [2024-07-15 15:36:01.647320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.316 [2024-07-15 15:36:01.647326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.316 [2024-07-15 15:36:01.647340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.316 qpair failed and we were unable to recover it. 
00:30:52.316 [2024-07-15 15:36:01.657228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.316 [2024-07-15 15:36:01.657292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.316 [2024-07-15 15:36:01.657306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.316 [2024-07-15 15:36:01.657313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.316 [2024-07-15 15:36:01.657320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.316 [2024-07-15 15:36:01.657333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.316 qpair failed and we were unable to recover it. 00:30:52.316 [2024-07-15 15:36:01.667267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.667321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.667336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.667343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.667349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.667363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.677285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.677343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.677358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.677365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.677371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.677385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 
00:30:52.317 [2024-07-15 15:36:01.687193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.687267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.687283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.687289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.687299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.687313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.697337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.697395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.697410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.697416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.697422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.697436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.707365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.707419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.707434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.707441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.707446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.707460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 
00:30:52.317 [2024-07-15 15:36:01.717384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.717439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.717454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.717461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.717467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.717481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.727471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.727530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.727544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.727551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.727557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.727570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.737358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.737416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.737432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.737439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.737445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.737460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 
00:30:52.317 [2024-07-15 15:36:01.747501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.747556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.747571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.747578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.747584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.747598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.757493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.757546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.757561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.757567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.317 [2024-07-15 15:36:01.757573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.317 [2024-07-15 15:36:01.757587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.317 qpair failed and we were unable to recover it. 00:30:52.317 [2024-07-15 15:36:01.767571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.317 [2024-07-15 15:36:01.767629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.317 [2024-07-15 15:36:01.767644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.317 [2024-07-15 15:36:01.767651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.767657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.767670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 
00:30:52.318 [2024-07-15 15:36:01.777575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.777633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.777647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.777658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.777664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.777678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.787598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.787652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.787667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.787674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.787680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.787694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.797625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.797681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.797695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.797702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.797708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.797722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 
00:30:52.318 [2024-07-15 15:36:01.807669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.807729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.807744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.807751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.807757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.807771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.817697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.817749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.817764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.817770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.817776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.817790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.827762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.827814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.827829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.827835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.827841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.827855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 
00:30:52.318 [2024-07-15 15:36:01.837727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.837787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.837801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.837808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.837814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.837827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.847762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.847830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.847844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.847851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.847857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.847870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.857795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.857847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.857861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.857868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.857874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.857891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 
00:30:52.318 [2024-07-15 15:36:01.867806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.318 [2024-07-15 15:36:01.867857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.318 [2024-07-15 15:36:01.867875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.318 [2024-07-15 15:36:01.867882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.318 [2024-07-15 15:36:01.867893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.318 [2024-07-15 15:36:01.867906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.318 qpair failed and we were unable to recover it. 00:30:52.318 [2024-07-15 15:36:01.877714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.877813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.877827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.877835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.877841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.877854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 00:30:52.319 [2024-07-15 15:36:01.887897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.887955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.887970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.887977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.887983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.887996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 
00:30:52.319 [2024-07-15 15:36:01.897898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.897961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.897975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.897982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.897988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.898002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 00:30:52.319 [2024-07-15 15:36:01.908612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.908684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.908699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.908706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.908712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.908729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 00:30:52.319 [2024-07-15 15:36:01.917992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.918057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.918071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.918078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.918084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.918097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 
00:30:52.319 [2024-07-15 15:36:01.928007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.319 [2024-07-15 15:36:01.928106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.319 [2024-07-15 15:36:01.928122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.319 [2024-07-15 15:36:01.928129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.319 [2024-07-15 15:36:01.928138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.319 [2024-07-15 15:36:01.928153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.319 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:01.938021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.938072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.938088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.938095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.938101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.938115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:01.948023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.948091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.948105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.948112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.948118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.948132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 
00:30:52.582 [2024-07-15 15:36:01.958033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.958090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.958108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.958115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.958122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.958136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:01.968065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.968156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.968171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.968177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.968183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.968197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:01.978074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.978129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.978143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.978150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.978156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.978170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 
00:30:52.582 [2024-07-15 15:36:01.988124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.988177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.988191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.988198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.988204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.988217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:01.998179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:01.998233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:01.998248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:01.998254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:01.998267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:01.998281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 00:30:52.582 [2024-07-15 15:36:02.008053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:02.008114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:02.008129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:02.008136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.582 [2024-07-15 15:36:02.008142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.582 [2024-07-15 15:36:02.008162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.582 qpair failed and we were unable to recover it. 
00:30:52.582 [2024-07-15 15:36:02.018218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.582 [2024-07-15 15:36:02.018270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.582 [2024-07-15 15:36:02.018286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.582 [2024-07-15 15:36:02.018293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.018299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.018313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.028220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.028281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.028295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.028302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.028309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.028322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.038274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.038338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.038352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.038359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.038365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.038379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 
00:30:52.583 [2024-07-15 15:36:02.048282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.048347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.048361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.048368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.048374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.048388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.058326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.058378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.058392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.058399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.058405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.058418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.068264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.068360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.068374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.068381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.068387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.068400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 
00:30:52.583 [2024-07-15 15:36:02.078373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.078429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.078443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.078450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.078456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.078469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.088390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.088454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.088468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.088475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.088485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.088498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.098429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.098481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.098495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.098502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.098508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.098522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 
00:30:52.583 [2024-07-15 15:36:02.108459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.108511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.108525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.108532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.108538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.108552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.118370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.118434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.118449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.118455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.118462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.118475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.128516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.128630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.128646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.128655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.128662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.128677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 
00:30:52.583 [2024-07-15 15:36:02.138550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.138608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.138623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.138630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.138636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.138650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.148561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.148634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.148648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.148655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.583 [2024-07-15 15:36:02.148661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.583 [2024-07-15 15:36:02.148674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.583 qpair failed and we were unable to recover it. 00:30:52.583 [2024-07-15 15:36:02.158595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.583 [2024-07-15 15:36:02.158647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.583 [2024-07-15 15:36:02.158661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.583 [2024-07-15 15:36:02.158668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.584 [2024-07-15 15:36:02.158674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.584 [2024-07-15 15:36:02.158687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.584 qpair failed and we were unable to recover it. 
00:30:52.584 [2024-07-15 15:36:02.168634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.584 [2024-07-15 15:36:02.168695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.584 [2024-07-15 15:36:02.168709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.584 [2024-07-15 15:36:02.168716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.584 [2024-07-15 15:36:02.168722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.584 [2024-07-15 15:36:02.168735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.584 qpair failed and we were unable to recover it. 00:30:52.584 [2024-07-15 15:36:02.178660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.584 [2024-07-15 15:36:02.178719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.584 [2024-07-15 15:36:02.178733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.584 [2024-07-15 15:36:02.178743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.584 [2024-07-15 15:36:02.178749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.584 [2024-07-15 15:36:02.178764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.584 qpair failed and we were unable to recover it. 00:30:52.584 [2024-07-15 15:36:02.188687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.584 [2024-07-15 15:36:02.188736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.584 [2024-07-15 15:36:02.188751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.584 [2024-07-15 15:36:02.188758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.584 [2024-07-15 15:36:02.188764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.584 [2024-07-15 15:36:02.188777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.584 qpair failed and we were unable to recover it. 
00:30:52.584 [2024-07-15 15:36:02.198697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.584 [2024-07-15 15:36:02.198754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.584 [2024-07-15 15:36:02.198768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.584 [2024-07-15 15:36:02.198775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.584 [2024-07-15 15:36:02.198781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.584 [2024-07-15 15:36:02.198794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.584 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.208728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.208786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.208800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.208807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.208813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.208826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.218761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.218817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.218831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.218838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.218844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.218859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 
00:30:52.847 [2024-07-15 15:36:02.228795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.228854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.228868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.228875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.228881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.228899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.238823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.238880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.238899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.238906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.238912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.238926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.248807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.248870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.248888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.248895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.248901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.248915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 
00:30:52.847 [2024-07-15 15:36:02.258868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.258923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.258938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.258944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.258950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.258964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.268897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.268948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.268965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.268972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.268978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.268992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.278927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.279010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.279024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.279031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.279037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.279051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 
00:30:52.847 [2024-07-15 15:36:02.288845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.288944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.288959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.847 [2024-07-15 15:36:02.288966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.847 [2024-07-15 15:36:02.288972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.847 [2024-07-15 15:36:02.288992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.847 qpair failed and we were unable to recover it. 00:30:52.847 [2024-07-15 15:36:02.298962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.847 [2024-07-15 15:36:02.299023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.847 [2024-07-15 15:36:02.299038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.299045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.299051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.299065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.309012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.309067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.309082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.309088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.309094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.309112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 
00:30:52.848 [2024-07-15 15:36:02.318920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.318975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.318990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.318997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.319003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.319016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.329081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.329142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.329156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.329163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.329168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.329182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.338985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.339047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.339062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.339068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.339075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.339088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 
00:30:52.848 [2024-07-15 15:36:02.349115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.349176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.349191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.349197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.349203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.349217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.359157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.359210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.359228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.359235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.359240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.359254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.369059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.369119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.369133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.369140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.369146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.369159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 
00:30:52.848 [2024-07-15 15:36:02.379209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.379264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.379278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.379285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.379291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.379304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.389225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.389297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.389311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.389318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.389323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.389337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.399270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.399370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.399384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.399391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.399397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.399414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 
00:30:52.848 [2024-07-15 15:36:02.409285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.409342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.409356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.409363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.409369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.409383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.419312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.419368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.419382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.419389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.419395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.419408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 00:30:52.848 [2024-07-15 15:36:02.429227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.429287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.429302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.848 [2024-07-15 15:36:02.429309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.848 [2024-07-15 15:36:02.429315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.848 [2024-07-15 15:36:02.429329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.848 qpair failed and we were unable to recover it. 
00:30:52.848 [2024-07-15 15:36:02.439249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.848 [2024-07-15 15:36:02.439307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.848 [2024-07-15 15:36:02.439322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.849 [2024-07-15 15:36:02.439329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.849 [2024-07-15 15:36:02.439335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.849 [2024-07-15 15:36:02.439349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.849 qpair failed and we were unable to recover it. 00:30:52.849 [2024-07-15 15:36:02.449399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.849 [2024-07-15 15:36:02.449462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.849 [2024-07-15 15:36:02.449477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.849 [2024-07-15 15:36:02.449484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.849 [2024-07-15 15:36:02.449490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.849 [2024-07-15 15:36:02.449503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.849 qpair failed and we were unable to recover it. 00:30:52.849 [2024-07-15 15:36:02.459450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:52.849 [2024-07-15 15:36:02.459500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:52.849 [2024-07-15 15:36:02.459514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:52.849 [2024-07-15 15:36:02.459521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:52.849 [2024-07-15 15:36:02.459527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:52.849 [2024-07-15 15:36:02.459540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:52.849 qpair failed and we were unable to recover it. 
00:30:53.113 [2024-07-15 15:36:02.469460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.113 [2024-07-15 15:36:02.469546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.113 [2024-07-15 15:36:02.469560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.113 [2024-07-15 15:36:02.469567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.113 [2024-07-15 15:36:02.469573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.113 [2024-07-15 15:36:02.469587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.113 qpair failed and we were unable to recover it. 00:30:53.113 [2024-07-15 15:36:02.479483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.113 [2024-07-15 15:36:02.479539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.113 [2024-07-15 15:36:02.479553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.113 [2024-07-15 15:36:02.479560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.113 [2024-07-15 15:36:02.479566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.479579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.489503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.489562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.489576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.489583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.489593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.489606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-07-15 15:36:02.499542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.499594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.499608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.499615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.499621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.499635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.509558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.509620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.509635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.509641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.509647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.509661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.519469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.519527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.519541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.519548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.519554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.519567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-07-15 15:36:02.529509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.529569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.529584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.529591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.529597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.529612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.539665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.539719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.539733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.539740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.539746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.539760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.549640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.549698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.549712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.549719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.549725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.549738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-07-15 15:36:02.559700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.559752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.559766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.559773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.559779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.559792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.569743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.569805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.569819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.569826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.569832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.569845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.579758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.579814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.579828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.579838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.579844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.579858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-07-15 15:36:02.589705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.589754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.589768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.589775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.589781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.589795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.599811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.599868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.599886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.599893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.599899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.599913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 00:30:53.114 [2024-07-15 15:36:02.609880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.609980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.609994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.610000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.610006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.114 [2024-07-15 15:36:02.610020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.114 qpair failed and we were unable to recover it. 
00:30:53.114 [2024-07-15 15:36:02.619866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.114 [2024-07-15 15:36:02.619924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.114 [2024-07-15 15:36:02.619938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.114 [2024-07-15 15:36:02.619945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.114 [2024-07-15 15:36:02.619951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.619965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.629925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.629976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.629990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.629997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.630003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.630017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.639951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.640009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.640024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.640030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.640036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.640050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-07-15 15:36:02.649952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.650011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.650025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.650032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.650038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.650051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.660006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.660061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.660075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.660082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.660088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.660101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.670012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.670064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.670078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.670088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.670094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.670108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-07-15 15:36:02.680064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.680119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.680133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.680139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.680145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.680159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.690075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.690132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.690147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.690153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.690159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.690173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.700092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.700183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.700197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.700204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.700210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.700223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.115 [2024-07-15 15:36:02.710140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.710195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.710209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.710216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.710222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.710236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.720219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.720290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.720304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.720311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.720317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.720330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 00:30:53.115 [2024-07-15 15:36:02.730206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.115 [2024-07-15 15:36:02.730264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.115 [2024-07-15 15:36:02.730278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.115 [2024-07-15 15:36:02.730286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.115 [2024-07-15 15:36:02.730292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.115 [2024-07-15 15:36:02.730305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.115 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-07-15 15:36:02.740158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.740210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.740225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.740232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.740238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.740251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.750245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.750297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.750312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.750318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.750324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.750338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.760285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.760337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.760354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.760361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.760367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.760380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-07-15 15:36:02.770328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.770384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.770398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.770405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.770411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.770425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.780394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.780461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.780475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.780482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.780488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.780501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.790344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.790395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.790410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.790417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.790422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.790436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-07-15 15:36:02.800383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.800434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.800448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.800455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.800461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.800479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.810419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.810478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.810493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.810500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.810506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.810519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.820440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.820501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.820515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.820521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.820527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.820541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-07-15 15:36:02.830481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.830534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.830549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.830555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.830561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.830575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.840510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.840565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.840579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.840585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.840591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.840604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 00:30:53.380 [2024-07-15 15:36:02.850541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.850596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.850615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.850621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.850627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.380 [2024-07-15 15:36:02.850641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.380 qpair failed and we were unable to recover it. 
00:30:53.380 [2024-07-15 15:36:02.860560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.380 [2024-07-15 15:36:02.860623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.380 [2024-07-15 15:36:02.860648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.380 [2024-07-15 15:36:02.860656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.380 [2024-07-15 15:36:02.860663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.860681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.870473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.870571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.870587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.870594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.870601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.870615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.880506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.880559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.880575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.880581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.880588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.880608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-07-15 15:36:02.890527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.890594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.890609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.890615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.890626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.890641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.900725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.900809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.900833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.900841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.900848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.900867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.910585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.910642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.910658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.910664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.910670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.910685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-07-15 15:36:02.920719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.920774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.920788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.920795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.920801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.920815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.930731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.930809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.930826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.930833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.930840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.930857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.940779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.940832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.940848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.940855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.940861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.940875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-07-15 15:36:02.950828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.950886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.950901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.950908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.950914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.950928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.960846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.960901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.960915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.960922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.960928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.960942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.970763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.970819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.970834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.970841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.970847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.970861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 
00:30:53.381 [2024-07-15 15:36:02.980907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.980957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.980971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.980982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.980988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.981002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.381 [2024-07-15 15:36:02.990978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.381 [2024-07-15 15:36:02.991029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.381 [2024-07-15 15:36:02.991043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.381 [2024-07-15 15:36:02.991050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.381 [2024-07-15 15:36:02.991055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.381 [2024-07-15 15:36:02.991070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.381 qpair failed and we were unable to recover it. 00:30:53.645 [2024-07-15 15:36:03.000991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-07-15 15:36:03.001085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-07-15 15:36:03.001100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-07-15 15:36:03.001107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-07-15 15:36:03.001113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.645 [2024-07-15 15:36:03.001127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-07-15 15:36:03.010980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-07-15 15:36:03.011038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-07-15 15:36:03.011052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-07-15 15:36:03.011059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-07-15 15:36:03.011065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.645 [2024-07-15 15:36:03.011078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-07-15 15:36:03.021023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-07-15 15:36:03.021076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-07-15 15:36:03.021091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-07-15 15:36:03.021098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-07-15 15:36:03.021104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.645 [2024-07-15 15:36:03.021117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 00:30:53.645 [2024-07-15 15:36:03.031034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.645 [2024-07-15 15:36:03.031089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.645 [2024-07-15 15:36:03.031103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.645 [2024-07-15 15:36:03.031110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.645 [2024-07-15 15:36:03.031116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.645 [2024-07-15 15:36:03.031130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.645 qpair failed and we were unable to recover it. 
00:30:53.645 [2024-07-15 15:36:03.041070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.041125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.041139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.041146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.041152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.041166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.051147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.051210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.051224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.051231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.051237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.051251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.060999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.061052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.061066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.061073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.061079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.061092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-07-15 15:36:03.071018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.071076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.071091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.071101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.071107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.071121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.081190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.081244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.081259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.081266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.081272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.081292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.091216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.091274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.091288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.091295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.091301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.091315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-07-15 15:36:03.101122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.101185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.101200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.101206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.101212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.101226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.111160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.111214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.111229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.111236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.111242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.111259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.121164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.121234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.121249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.121256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.121262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.121276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-07-15 15:36:03.131318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.131381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.131395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.131402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.131408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.131421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.141352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.141404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.141418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.141425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.141431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.141445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.151342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.151397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.151411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.151418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.151424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.151438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 
00:30:53.646 [2024-07-15 15:36:03.161404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.161459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.161477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.161484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.161490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.161503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.171422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.171480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.171495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.646 [2024-07-15 15:36:03.171502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.646 [2024-07-15 15:36:03.171507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.646 [2024-07-15 15:36:03.171521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.646 qpair failed and we were unable to recover it. 00:30:53.646 [2024-07-15 15:36:03.181458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.646 [2024-07-15 15:36:03.181517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.646 [2024-07-15 15:36:03.181532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.181538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.181544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.181558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-07-15 15:36:03.191474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.191530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.191545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.191551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.191557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.191571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-07-15 15:36:03.201497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.201554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.201569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.201575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.201581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.201598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-07-15 15:36:03.211550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.211609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.211624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.211631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.211637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.211650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-07-15 15:36:03.221549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.221603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.221617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.221623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.221629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.221643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-07-15 15:36:03.231581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.231633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.231647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.231654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.231660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.231673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-07-15 15:36:03.241483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.241541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.241555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.241562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.241568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.241581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 
00:30:53.647 [2024-07-15 15:36:03.251635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.251727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.251745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.251751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.251757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.251771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.647 [2024-07-15 15:36:03.261671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.647 [2024-07-15 15:36:03.261726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.647 [2024-07-15 15:36:03.261740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.647 [2024-07-15 15:36:03.261747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.647 [2024-07-15 15:36:03.261753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.647 [2024-07-15 15:36:03.261767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.647 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.271687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.271738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.271753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.271759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.271765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.271779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 
00:30:53.911 [2024-07-15 15:36:03.281608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.281673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.281689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.281696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.281702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.281716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.291732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.291829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.291844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.291851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.291860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.291875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.301747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.301807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.301821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.301828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.301834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.301847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 
00:30:53.911 [2024-07-15 15:36:03.311800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.311851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.311865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.311873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.311879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.311895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.321846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.321905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.321921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.321928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.321938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.321953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.331856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.331955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.331969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.331976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.331982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.331996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 
00:30:53.911 [2024-07-15 15:36:03.341900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.341956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.341970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.341977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.341983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.911 [2024-07-15 15:36:03.341997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.911 qpair failed and we were unable to recover it. 00:30:53.911 [2024-07-15 15:36:03.351928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.911 [2024-07-15 15:36:03.352005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.911 [2024-07-15 15:36:03.352020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.911 [2024-07-15 15:36:03.352026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.911 [2024-07-15 15:36:03.352032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.352046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.362006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.362057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.362071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.362078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.362084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.362098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 
00:30:53.912 [2024-07-15 15:36:03.371965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.372023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.372037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.372044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.372050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.372063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.381981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.382034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.382048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.382054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.382064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.382078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.391913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.391972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.391986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.391994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.392000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.392014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 
00:30:53.912 [2024-07-15 15:36:03.402079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.402135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.402150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.402156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.402162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.402176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.412117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.412179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.412194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.412201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.412207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.412220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.422116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.422168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.422183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.422189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.422195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.422209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 
00:30:53.912 [2024-07-15 15:36:03.432189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.432275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.432289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.432296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.432302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.432316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.442205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.442271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.442285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.442292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.442298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.442311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.452195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.452252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.452267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.452274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.452280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.452294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 
00:30:53.912 [2024-07-15 15:36:03.462221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.462315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.462329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.462336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.462342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.462355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.472254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.472333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.472348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.472361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.472367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.472381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 00:30:53.912 [2024-07-15 15:36:03.482330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.482383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.482397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.482404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.482410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.912 [2024-07-15 15:36:03.482423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.912 qpair failed and we were unable to recover it. 
00:30:53.912 [2024-07-15 15:36:03.492362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.912 [2024-07-15 15:36:03.492423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.912 [2024-07-15 15:36:03.492438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.912 [2024-07-15 15:36:03.492445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.912 [2024-07-15 15:36:03.492453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.913 [2024-07-15 15:36:03.492468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.913 qpair failed and we were unable to recover it. 00:30:53.913 [2024-07-15 15:36:03.502338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.913 [2024-07-15 15:36:03.502392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.913 [2024-07-15 15:36:03.502406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.913 [2024-07-15 15:36:03.502413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.913 [2024-07-15 15:36:03.502419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.913 [2024-07-15 15:36:03.502432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.913 qpair failed and we were unable to recover it. 00:30:53.913 [2024-07-15 15:36:03.512361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.913 [2024-07-15 15:36:03.512413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.913 [2024-07-15 15:36:03.512428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.913 [2024-07-15 15:36:03.512434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.913 [2024-07-15 15:36:03.512440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.913 [2024-07-15 15:36:03.512454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.913 qpair failed and we were unable to recover it. 
00:30:53.913 [2024-07-15 15:36:03.522286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:53.913 [2024-07-15 15:36:03.522340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:53.913 [2024-07-15 15:36:03.522355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:53.913 [2024-07-15 15:36:03.522361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:53.913 [2024-07-15 15:36:03.522367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:53.913 [2024-07-15 15:36:03.522381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:53.913 qpair failed and we were unable to recover it. 00:30:54.175 [2024-07-15 15:36:03.532422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.175 [2024-07-15 15:36:03.532512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.175 [2024-07-15 15:36:03.532527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.532534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.532540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.532553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.542455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.542541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.542555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.542562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.542568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.542582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 
00:30:54.176 [2024-07-15 15:36:03.552470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.552532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.552546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.552553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.552559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.552573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.562395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.562451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.562469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.562476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.562482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.562495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.572492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.572549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.572563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.572570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.572576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.572589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 
00:30:54.176 [2024-07-15 15:36:03.582562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.582659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.582673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.582680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.582687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.582700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.592586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.592639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.592653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.592660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.592666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.592679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.602493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.602551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.602567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.602573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.602579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.602601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 
00:30:54.176 [2024-07-15 15:36:03.612638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.612696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.612711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.612717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.612723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.612737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.622661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.622715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.622729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.622736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.622743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.622756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.632690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.632773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.632787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.632794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.632800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.632813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 
00:30:54.176 [2024-07-15 15:36:03.642599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.642666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.642680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.642687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.642693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.642706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.652743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.652802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.652821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.652828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.652836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.652851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 00:30:54.176 [2024-07-15 15:36:03.662773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.662825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.662840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.176 [2024-07-15 15:36:03.662847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.176 [2024-07-15 15:36:03.662853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.176 [2024-07-15 15:36:03.662867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.176 qpair failed and we were unable to recover it. 
00:30:54.176 [2024-07-15 15:36:03.672690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.176 [2024-07-15 15:36:03.672742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.176 [2024-07-15 15:36:03.672757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.672764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.672770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.672783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.682830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.682887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.682902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.682909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.682916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.682930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.692873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.692938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.692953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.692960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.692970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.692984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 
00:30:54.177 [2024-07-15 15:36:03.702823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.702879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.702899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.702906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.702912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.702926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.712920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.712973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.712987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.712994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.713000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.713014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.722964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.723019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.723034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.723040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.723046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.723060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 
00:30:54.177 [2024-07-15 15:36:03.732967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.733023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.733037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.733044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.733050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.733064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.742969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.743025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.743040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.743046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.743052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.743066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.753006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.753059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.753073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.753080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.753086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.753099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 
00:30:54.177 [2024-07-15 15:36:03.763039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.763096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.763111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.763117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.763123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.763137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.773069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.773136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.773150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.773157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.773163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.773176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.177 [2024-07-15 15:36:03.783097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.783186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.783201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.783207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.783217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.783231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 
00:30:54.177 [2024-07-15 15:36:03.793129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.177 [2024-07-15 15:36:03.793182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.177 [2024-07-15 15:36:03.793196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.177 [2024-07-15 15:36:03.793203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.177 [2024-07-15 15:36:03.793209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.177 [2024-07-15 15:36:03.793222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.177 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.803185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.803284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.803299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.803306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.803312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.803326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.813191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.813251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.813266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.813273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.813279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.813293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 
00:30:54.441 [2024-07-15 15:36:03.823202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.823255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.823269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.823276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.823282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.823296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.833258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.833312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.833327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.833333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.833340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.833354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.843301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.843357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.843371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.843377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.843384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.843397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 
00:30:54.441 [2024-07-15 15:36:03.853297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.853363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.853377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.853384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.853390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.853404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.863275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.863337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.863352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.863358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.863364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.863377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.873345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.873405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.873420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.873430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.873437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.873451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 
00:30:54.441 [2024-07-15 15:36:03.883384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.883440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.883455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.883461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.883467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.883481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.893398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.893454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.893468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.893475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.893481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.893495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.903412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.903463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.903478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.903485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.903491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.903504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 
00:30:54.441 [2024-07-15 15:36:03.913479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.913531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.913546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.913552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.913559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.913572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.441 [2024-07-15 15:36:03.923388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.441 [2024-07-15 15:36:03.923444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.441 [2024-07-15 15:36:03.923458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.441 [2024-07-15 15:36:03.923465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.441 [2024-07-15 15:36:03.923471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.441 [2024-07-15 15:36:03.923484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.441 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:03.933498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.933559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.933574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.933580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.933587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.933600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 
00:30:54.442 [2024-07-15 15:36:03.943547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.943648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.943663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.943670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.943676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.943689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:03.953559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.953624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.953648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.953657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.953663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.953681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:03.963603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.963664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.963692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.963701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.963707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.963726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 
00:30:54.442 [2024-07-15 15:36:03.973540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.973634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.973658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.973666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.973673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.973691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:03.983659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.983717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.983733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.983740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.983746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.983761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:03.993702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:03.993755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:03.993770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:03.993777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:03.993783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:03.993797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 
00:30:54.442 [2024-07-15 15:36:04.003721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.003777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.003792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.003799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.003805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.003824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:04.013747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.013807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.013822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.013829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.013835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.013849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:04.023769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.023820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.023835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.023842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.023848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.023861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 
00:30:54.442 [2024-07-15 15:36:04.033674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.033735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.033749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.033756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.033762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.033776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:04.043842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.043902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.043916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.043923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.043929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.043943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 00:30:54.442 [2024-07-15 15:36:04.053848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.442 [2024-07-15 15:36:04.053914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.442 [2024-07-15 15:36:04.053932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.442 [2024-07-15 15:36:04.053939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.442 [2024-07-15 15:36:04.053945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.442 [2024-07-15 15:36:04.053959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.442 qpair failed and we were unable to recover it. 
00:30:54.705 [2024-07-15 15:36:04.063802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.063853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.063869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.063876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.063889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.063905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.073889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.073942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.073957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.073964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.073970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.073984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.083938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.083992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.084007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.084014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.084020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.084033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 
00:30:54.705 [2024-07-15 15:36:04.093966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.094034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.094049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.094055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.094061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.094080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.103990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.104042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.104056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.104063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.104069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.104083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.114019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.114109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.114123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.114130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.114136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.114150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 
00:30:54.705 [2024-07-15 15:36:04.124041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.124123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.124137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.124144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.124150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.124163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.134076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.134134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.134148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.134155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.134161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.134175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.144153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.144213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.144228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.144235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.144241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.144254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 
00:30:54.705 [2024-07-15 15:36:04.154053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.705 [2024-07-15 15:36:04.154106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.705 [2024-07-15 15:36:04.154120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.705 [2024-07-15 15:36:04.154127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.705 [2024-07-15 15:36:04.154133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.705 [2024-07-15 15:36:04.154147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.705 qpair failed and we were unable to recover it. 00:30:54.705 [2024-07-15 15:36:04.164162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.164216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.164230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.164237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.164242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.164256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.174166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.174228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.174242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.174249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.174255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.174269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 
00:30:54.706 [2024-07-15 15:36:04.184218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.184268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.184282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.184289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.184298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.184312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.194221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.194279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.194293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.194300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.194306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.194320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.204147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.204212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.204226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.204232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.204238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.204252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 
00:30:54.706 [2024-07-15 15:36:04.214263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.214324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.214338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.214344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.214350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.214364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.224326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.224423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.224438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.224444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.224450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.224464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.234353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.234413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.234427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.234434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.234440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.234454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 
00:30:54.706 [2024-07-15 15:36:04.244318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.244377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.244391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.244398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.244404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.244417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.254418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.254484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.254498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.254505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.254511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.254524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.264435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.264488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.264502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.264509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.264515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.264529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 
00:30:54.706 [2024-07-15 15:36:04.274485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.274541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.274555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.274569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.274575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.274589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.284505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.284560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.284574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.284581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.284587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.706 [2024-07-15 15:36:04.284601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.706 qpair failed and we were unable to recover it. 00:30:54.706 [2024-07-15 15:36:04.294526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.706 [2024-07-15 15:36:04.294582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.706 [2024-07-15 15:36:04.294596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.706 [2024-07-15 15:36:04.294603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.706 [2024-07-15 15:36:04.294609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.707 [2024-07-15 15:36:04.294622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.707 qpair failed and we were unable to recover it. 
00:30:54.707 [2024-07-15 15:36:04.304430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.707 [2024-07-15 15:36:04.304489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.707 [2024-07-15 15:36:04.304503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.707 [2024-07-15 15:36:04.304510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.707 [2024-07-15 15:36:04.304516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.707 [2024-07-15 15:36:04.304530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.707 qpair failed and we were unable to recover it. 00:30:54.707 [2024-07-15 15:36:04.314582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.707 [2024-07-15 15:36:04.314640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.707 [2024-07-15 15:36:04.314654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.707 [2024-07-15 15:36:04.314661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.707 [2024-07-15 15:36:04.314667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.707 [2024-07-15 15:36:04.314681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.707 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.324501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.324563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.324579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.324586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.324595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.324610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 
00:30:54.969 [2024-07-15 15:36:04.334648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.334713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.334728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.334735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.334741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.334755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.344656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.344704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.344718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.344725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.344731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.344744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.354587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.354644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.354659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.354665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.354671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.354685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 
00:30:54.969 [2024-07-15 15:36:04.364745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.364868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.364882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.364897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.364903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.364917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.374736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.374799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.374813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.374820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.374826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.374839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.384822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.384890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.384905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.384912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.384917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.384931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 
00:30:54.969 [2024-07-15 15:36:04.394780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.394830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.394845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.394852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.394857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.394871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.404825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.404879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.404897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.404904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.404910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.404924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.969 [2024-07-15 15:36:04.414728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.414788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.414803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.414809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.414816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.414829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 
00:30:54.969 [2024-07-15 15:36:04.424852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.969 [2024-07-15 15:36:04.424916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.969 [2024-07-15 15:36:04.424931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.969 [2024-07-15 15:36:04.424938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.969 [2024-07-15 15:36:04.424944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.969 [2024-07-15 15:36:04.424957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.969 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.434898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.434956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.434970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.434977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.434983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.434997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.444933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.444993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.445007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.445014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.445020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.445033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 
00:30:54.970 [2024-07-15 15:36:04.454825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.454892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.454910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.454917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.454923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.454936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.464843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.464893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.464908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.464914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.464920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.464934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.474988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.475040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.475054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.475061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.475067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.475080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 
00:30:54.970 [2024-07-15 15:36:04.485003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.485061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.485075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.485082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.485088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.485101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.495068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.495125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.495139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.495146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.495152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.495169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.505049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.505098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.505112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.505119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.505125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.505138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 
00:30:54.970 [2024-07-15 15:36:04.515095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.515142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.515156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.515163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.515169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.515182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.525271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.525350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.525364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.525371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.525377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.525390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.535224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.535280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.535294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.535301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.535307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.535320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 
00:30:54.970 [2024-07-15 15:36:04.545195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.545244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.545261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.545268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.545274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.545288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.555160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.555215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.555229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.555235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.555241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.555255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 00:30:54.970 [2024-07-15 15:36:04.565264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.970 [2024-07-15 15:36:04.565318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.970 [2024-07-15 15:36:04.565332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.970 [2024-07-15 15:36:04.565338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.970 [2024-07-15 15:36:04.565344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.970 [2024-07-15 15:36:04.565358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.970 qpair failed and we were unable to recover it. 
00:30:54.970 [2024-07-15 15:36:04.575303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.971 [2024-07-15 15:36:04.575360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.971 [2024-07-15 15:36:04.575375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.971 [2024-07-15 15:36:04.575381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.971 [2024-07-15 15:36:04.575388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.971 [2024-07-15 15:36:04.575401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.971 qpair failed and we were unable to recover it. 00:30:54.971 [2024-07-15 15:36:04.585267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:54.971 [2024-07-15 15:36:04.585318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:54.971 [2024-07-15 15:36:04.585332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:54.971 [2024-07-15 15:36:04.585339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:54.971 [2024-07-15 15:36:04.585348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:54.971 [2024-07-15 15:36:04.585362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:54.971 qpair failed and we were unable to recover it. 00:30:55.233 [2024-07-15 15:36:04.595343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.595395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.233 [2024-07-15 15:36:04.595409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.233 [2024-07-15 15:36:04.595416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.233 [2024-07-15 15:36:04.595422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.233 [2024-07-15 15:36:04.595435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.233 qpair failed and we were unable to recover it. 
00:30:55.233 [2024-07-15 15:36:04.605267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.605323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.233 [2024-07-15 15:36:04.605338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.233 [2024-07-15 15:36:04.605345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.233 [2024-07-15 15:36:04.605351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.233 [2024-07-15 15:36:04.605364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.233 qpair failed and we were unable to recover it. 00:30:55.233 [2024-07-15 15:36:04.615399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.615459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.233 [2024-07-15 15:36:04.615473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.233 [2024-07-15 15:36:04.615480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.233 [2024-07-15 15:36:04.615486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.233 [2024-07-15 15:36:04.615500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.233 qpair failed and we were unable to recover it. 00:30:55.233 [2024-07-15 15:36:04.625393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.625435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.233 [2024-07-15 15:36:04.625449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.233 [2024-07-15 15:36:04.625456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.233 [2024-07-15 15:36:04.625462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.233 [2024-07-15 15:36:04.625476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.233 qpair failed and we were unable to recover it. 
00:30:55.233 [2024-07-15 15:36:04.635433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.635492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.233 [2024-07-15 15:36:04.635507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.233 [2024-07-15 15:36:04.635514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.233 [2024-07-15 15:36:04.635520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.233 [2024-07-15 15:36:04.635533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.233 qpair failed and we were unable to recover it. 00:30:55.233 [2024-07-15 15:36:04.645482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.233 [2024-07-15 15:36:04.645536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.645550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.645557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.645562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.645575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.655546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.655605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.655620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.655626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.655632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.655645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 
00:30:55.234 [2024-07-15 15:36:04.665494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.665556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.665581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.665589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.665595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.665613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.675548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.675607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.675631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.675644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.675651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.675669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.685473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.685534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.685552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.685559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.685565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.685581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 
00:30:55.234 [2024-07-15 15:36:04.695503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.695604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.695619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.695626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.695632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.695646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.705624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.705717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.705732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.705738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.705744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.705758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.715629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.715682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.715696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.715703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.715709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.715723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 
00:30:55.234 [2024-07-15 15:36:04.725745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.725797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.725812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.725818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.725824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.725838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.735713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.735771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.735785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.735792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.735798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.735811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.745699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.745747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.745761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.745768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.745774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.745788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 
00:30:55.234 [2024-07-15 15:36:04.755749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.755798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.755812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.755818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.755824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.755839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.765738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.765842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.765857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.765868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.765874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.765892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 00:30:55.234 [2024-07-15 15:36:04.775850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.775915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.775931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.775938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.775944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.234 [2024-07-15 15:36:04.775958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.234 qpair failed and we were unable to recover it. 
00:30:55.234 [2024-07-15 15:36:04.785828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.234 [2024-07-15 15:36:04.785874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.234 [2024-07-15 15:36:04.785891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.234 [2024-07-15 15:36:04.785898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.234 [2024-07-15 15:36:04.785904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.785918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 00:30:55.235 [2024-07-15 15:36:04.795712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.795759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.795773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.795780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.795786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.795801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 00:30:55.235 [2024-07-15 15:36:04.805851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.805903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.805918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.805925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.805931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.805945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 
00:30:55.235 [2024-07-15 15:36:04.815903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.815955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.815969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.815976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.815981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.815996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 00:30:55.235 [2024-07-15 15:36:04.825981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.826037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.826051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.826058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.826064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.826078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 00:30:55.235 [2024-07-15 15:36:04.836012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.836087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.836101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.836108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.836114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.836127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 
00:30:55.235 [2024-07-15 15:36:04.846000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.235 [2024-07-15 15:36:04.846048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.235 [2024-07-15 15:36:04.846063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.235 [2024-07-15 15:36:04.846070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.235 [2024-07-15 15:36:04.846076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.235 [2024-07-15 15:36:04.846090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.235 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.856020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.856077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.856099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.856105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.856111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.856125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.866033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.866079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.866094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.866100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.866106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.866120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 
00:30:55.497 [2024-07-15 15:36:04.875967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.876016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.876030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.876037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.876045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.876059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.885983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.886032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.886046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.886053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.886059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.886073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.896144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.896194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.896208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.896215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.896221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.896239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 
00:30:55.497 [2024-07-15 15:36:04.906153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.906202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.906216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.906223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.906229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.906242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.916167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.916210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.916224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.916230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.916236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.916250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.926082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.926129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.926143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.926150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.926156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.926169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 
00:30:55.497 [2024-07-15 15:36:04.936227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.936282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.936296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.936303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.936309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.936322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.946251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.946299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.946317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.497 [2024-07-15 15:36:04.946323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.497 [2024-07-15 15:36:04.946329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.497 [2024-07-15 15:36:04.946343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.497 qpair failed and we were unable to recover it. 00:30:55.497 [2024-07-15 15:36:04.956286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.497 [2024-07-15 15:36:04.956330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.497 [2024-07-15 15:36:04.956344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:04.956351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:04.956357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:04.956371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 
00:30:55.498 [2024-07-15 15:36:04.966315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:04.966362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:04.966376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:04.966383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:04.966389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:04.966402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:04.976318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:04.976370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:04.976384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:04.976391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:04.976397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:04.976410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:04.986332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:04.986378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:04.986392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:04.986399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:04.986408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:04.986422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 
00:30:55.498 [2024-07-15 15:36:04.996350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:04.996396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:04.996411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:04.996418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:04.996423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:04.996439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.006288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.006359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.006373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.006380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.006386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.006400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.016456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.016509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.016523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.016530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.016536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.016550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 
00:30:55.498 [2024-07-15 15:36:05.026474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.026521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.026535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.026542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.026548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.026561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.036483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.036533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.036547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.036553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.036559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.036573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.046526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.046573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.046588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.046595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.046601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.046614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 
00:30:55.498 [2024-07-15 15:36:05.056513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.056566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.056580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.056587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.056593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.056606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.066591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.066638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.066652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.066659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.066665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.066678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.076487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.076534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.076549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.076556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.076565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.076579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 
00:30:55.498 [2024-07-15 15:36:05.086624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.086670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.086684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.086691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.086697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.498 [2024-07-15 15:36:05.086711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.498 qpair failed and we were unable to recover it. 00:30:55.498 [2024-07-15 15:36:05.096644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.498 [2024-07-15 15:36:05.096697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.498 [2024-07-15 15:36:05.096711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.498 [2024-07-15 15:36:05.096718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.498 [2024-07-15 15:36:05.096723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.499 [2024-07-15 15:36:05.096737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.499 qpair failed and we were unable to recover it. 00:30:55.499 [2024-07-15 15:36:05.106700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.499 [2024-07-15 15:36:05.106746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.499 [2024-07-15 15:36:05.106760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.499 [2024-07-15 15:36:05.106766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.499 [2024-07-15 15:36:05.106772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.499 [2024-07-15 15:36:05.106785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.499 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.116700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.116747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.116761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.116768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.116774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.116788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.126748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.126809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.126824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.126831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.126836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.126850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.136784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.136879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.136897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.136904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.136910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.136924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.146800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.146848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.146862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.146869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.146875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.146892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.156829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.156874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.156893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.156900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.156906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.156919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.166863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.166928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.166943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.166953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.166959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.166972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.176894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.176947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.176962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.176969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.176975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.176989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.186915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.186962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.186977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.186984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.186990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.187004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.196948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.196997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.197011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.197017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.197023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.197037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.206970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.207021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.207036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.207043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.207049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.207064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.216992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.217045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.217060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.217067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.217073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.217086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.227021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.227072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.227087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.227094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.227100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.227113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.236946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.236994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.237008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.237015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.237021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.237035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.247084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.247139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.247153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.247160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.247166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.247179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.256974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.257023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.257040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.257047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.257053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.257067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.267136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.267181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.267196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.267204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.267211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.267225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.277166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.277213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.277227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.277234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.277240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.277254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.287187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.287237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.287252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.287259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.287266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.287280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 
00:30:55.761 [2024-07-15 15:36:05.297217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.297266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.297280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.297287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.761 [2024-07-15 15:36:05.297293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.761 [2024-07-15 15:36:05.297310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.761 qpair failed and we were unable to recover it. 00:30:55.761 [2024-07-15 15:36:05.307121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.761 [2024-07-15 15:36:05.307171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.761 [2024-07-15 15:36:05.307186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.761 [2024-07-15 15:36:05.307192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.307198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.307212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-07-15 15:36:05.317275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.317323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.317337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.317344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.317350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.317363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 
00:30:55.762 [2024-07-15 15:36:05.327167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.327218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.327233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.327239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.327245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.327259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-07-15 15:36:05.337211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.337262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.337277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.337284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.337290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.337303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-07-15 15:36:05.347356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.347453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.347470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.347477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.347483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.347497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 
00:30:55.762 [2024-07-15 15:36:05.357350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.357396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.357410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.357417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.357423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.357437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-07-15 15:36:05.367401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.367451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.367465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.367472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.367478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.367491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-07-15 15:36:05.377447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:55.762 [2024-07-15 15:36:05.377496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:55.762 [2024-07-15 15:36:05.377511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:55.762 [2024-07-15 15:36:05.377517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:55.762 [2024-07-15 15:36:05.377523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:55.762 [2024-07-15 15:36:05.377537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:55.762 qpair failed and we were unable to recover it. 
00:30:56.022 [2024-07-15 15:36:05.387431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.022 [2024-07-15 15:36:05.387480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.022 [2024-07-15 15:36:05.387495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.022 [2024-07-15 15:36:05.387502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.022 [2024-07-15 15:36:05.387511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.022 [2024-07-15 15:36:05.387525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.022 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.397462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.397507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.397521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.397528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.397534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.397548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.407485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.407534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.407549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.407556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.407562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.407575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.417529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.417587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.417612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.417621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.417628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.417646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.427571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.427655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.427670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.427678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.427684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.427699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.437581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.437635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.437659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.437668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.437675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.437694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.447603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.447660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.447684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.447692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.447699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.447717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.457654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.457715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.457733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.457743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.457749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.457764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.467682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.467732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.467747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.467754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.467760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.467774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.477695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.477739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.477754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.477761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.477771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.477785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.487729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.487782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.487797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.487803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.487809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.487823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.497749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.497806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.497821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.497827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.497833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.497847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.507791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.507879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.507898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.507904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.507910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.507924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.517822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.517864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.517879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.517889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.517895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.517909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.527733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.527781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.527795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.527802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.527808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.527822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.537872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.537934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.537948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.537955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.537961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.537975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.547794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.547847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.547861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.547868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.547874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.547892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.557919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.557965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.557980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.557986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.557992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.558006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.567952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.568002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.568016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.568027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.568033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.568047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.577845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.577899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.577914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.577921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.577927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.577940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.587868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.587915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.587930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.587936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.587942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.587956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.598048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.598094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.598109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.598116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.598122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.598136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.608060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.608110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.608125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.608131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.608137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.608151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.618016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.618066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.618080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.618087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.618093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.618107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 
00:30:56.023 [2024-07-15 15:36:05.628079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.628126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.023 [2024-07-15 15:36:05.628140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.023 [2024-07-15 15:36:05.628147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.023 [2024-07-15 15:36:05.628153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.023 [2024-07-15 15:36:05.628167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.023 qpair failed and we were unable to recover it. 00:30:56.023 [2024-07-15 15:36:05.638144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.023 [2024-07-15 15:36:05.638189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.024 [2024-07-15 15:36:05.638203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.024 [2024-07-15 15:36:05.638209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.024 [2024-07-15 15:36:05.638215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.024 [2024-07-15 15:36:05.638229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.024 qpair failed and we were unable to recover it. 00:30:56.287 [2024-07-15 15:36:05.648135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.287 [2024-07-15 15:36:05.648181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.287 [2024-07-15 15:36:05.648195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.287 [2024-07-15 15:36:05.648202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.287 [2024-07-15 15:36:05.648208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.287 [2024-07-15 15:36:05.648222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.287 qpair failed and we were unable to recover it. 
00:30:56.287 [2024-07-15 15:36:05.658054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.287 [2024-07-15 15:36:05.658103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.287 [2024-07-15 15:36:05.658121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.287 [2024-07-15 15:36:05.658128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.287 [2024-07-15 15:36:05.658134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.287 [2024-07-15 15:36:05.658148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-07-15 15:36:05.668227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.287 [2024-07-15 15:36:05.668274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.287 [2024-07-15 15:36:05.668288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.668295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.668301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.668314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.678248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.678294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.678308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.678315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.678321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.678335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-07-15 15:36:05.688277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.688326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.688341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.688347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.688353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.688367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.698292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.698342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.698356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.698363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.698369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.698386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.708327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.708381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.708395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.708402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.708408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.708421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-07-15 15:36:05.718292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.718336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.718350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.718357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.718363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.718377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.728333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.728382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.728396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.728402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.728408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.728421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.738393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.738449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.738464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.738470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.738476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.738490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-07-15 15:36:05.748334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.748382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.748401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.748408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.748414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.748429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.758432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.758478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.758493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.758499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.758505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.758519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.768477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.768526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.768541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.768547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.768553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.768567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-07-15 15:36:05.778500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.778588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.778603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.778610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.778616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.778629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.788540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.788589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.788604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.788610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.788616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.788634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-07-15 15:36:05.798567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.798617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.288 [2024-07-15 15:36:05.798631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.288 [2024-07-15 15:36:05.798638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.288 [2024-07-15 15:36:05.798643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.288 [2024-07-15 15:36:05.798657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-07-15 15:36:05.808588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.288 [2024-07-15 15:36:05.808683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.808707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.808716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.808723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.808741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.818669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.818724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.818740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.818747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.818753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.818767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.828508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.828558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.828573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.828580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.828586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.828600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 
00:30:56.289 [2024-07-15 15:36:05.838682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.838732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.838747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.838753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.838759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.838773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.848695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.848787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.848802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.848809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.848815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.848829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.858694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.858746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.858761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.858767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.858773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.858787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 
00:30:56.289 [2024-07-15 15:36:05.868757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.868801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.868815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.868822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.868828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.868842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.878783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.878832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.878846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.878853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.878863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.878877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.289 [2024-07-15 15:36:05.888687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.888734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.888749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.888756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.888762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.888776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 
00:30:56.289 [2024-07-15 15:36:05.898846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.289 [2024-07-15 15:36:05.898923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.289 [2024-07-15 15:36:05.898938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.289 [2024-07-15 15:36:05.898944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.289 [2024-07-15 15:36:05.898950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.289 [2024-07-15 15:36:05.898965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.289 qpair failed and we were unable to recover it. 00:30:56.550 [2024-07-15 15:36:05.908833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.550 [2024-07-15 15:36:05.908888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.550 [2024-07-15 15:36:05.908903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.550 [2024-07-15 15:36:05.908910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.550 [2024-07-15 15:36:05.908916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.550 [2024-07-15 15:36:05.908930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.550 qpair failed and we were unable to recover it. 00:30:56.550 [2024-07-15 15:36:05.918996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.550 [2024-07-15 15:36:05.919048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.550 [2024-07-15 15:36:05.919063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.550 [2024-07-15 15:36:05.919070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.550 [2024-07-15 15:36:05.919077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.550 [2024-07-15 15:36:05.919091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.550 qpair failed and we were unable to recover it. 
00:30:56.550 [2024-07-15 15:36:05.928994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.550 [2024-07-15 15:36:05.929051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.550 [2024-07-15 15:36:05.929065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.550 [2024-07-15 15:36:05.929072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.550 [2024-07-15 15:36:05.929078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.550 [2024-07-15 15:36:05.929091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.550 qpair failed and we were unable to recover it. 00:30:56.550 [2024-07-15 15:36:05.938976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.550 [2024-07-15 15:36:05.939036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.550 [2024-07-15 15:36:05.939050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.550 [2024-07-15 15:36:05.939057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.550 [2024-07-15 15:36:05.939063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.550 [2024-07-15 15:36:05.939076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.550 qpair failed and we were unable to recover it. 00:30:56.550 [2024-07-15 15:36:05.948987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.550 [2024-07-15 15:36:05.949036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.949051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.949057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.949063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.949076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 
00:30:56.551 [2024-07-15 15:36:05.959044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:05.959091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.959105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.959112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.959118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.959132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:05.969031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:05.969079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.969093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.969104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.969110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.969123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:05.979063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:05.979129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.979144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.979151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.979156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.979171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 
00:30:56.551 [2024-07-15 15:36:05.989121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:05.989165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.989179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.989186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.989192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.989206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:05.999059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:05.999115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:05.999129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:05.999136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:05.999142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:05.999155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.009154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.009203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.009217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.009224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.009230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.009243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 
00:30:56.551 [2024-07-15 15:36:06.019155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.019210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.019224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.019231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.019237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.019250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.029192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.029244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.029258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.029264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.029270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.029284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.039195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.039244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.039258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.039264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.039270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.039284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 
00:30:56.551 [2024-07-15 15:36:06.049255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.049303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.049317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.049324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.049330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.049343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.059275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.059325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.059339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.059350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.059356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.059370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.069277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.069321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.069335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.069341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.069347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.069361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 
00:30:56.551 [2024-07-15 15:36:06.079321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.079366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.079380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.079386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.079392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.551 [2024-07-15 15:36:06.079406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.551 qpair failed and we were unable to recover it. 00:30:56.551 [2024-07-15 15:36:06.089362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.551 [2024-07-15 15:36:06.089408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.551 [2024-07-15 15:36:06.089422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.551 [2024-07-15 15:36:06.089428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.551 [2024-07-15 15:36:06.089434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.089448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 00:30:56.552 [2024-07-15 15:36:06.099377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.099427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.099441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.099448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.099454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.099467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 
00:30:56.552 [2024-07-15 15:36:06.109408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.109453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.109467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.109474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.109480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.109493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 00:30:56.552 [2024-07-15 15:36:06.119399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.119448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.119462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.119468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.119475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.119488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 00:30:56.552 [2024-07-15 15:36:06.129453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.129499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.129513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.129519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.129525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.129538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 
00:30:56.552 [2024-07-15 15:36:06.139489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.139540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.139554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.139561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.139567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.139581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 00:30:56.552 [2024-07-15 15:36:06.149386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.149433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.149451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.149457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.149465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.149479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 00:30:56.552 [2024-07-15 15:36:06.159527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.552 [2024-07-15 15:36:06.159570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.552 [2024-07-15 15:36:06.159585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.552 [2024-07-15 15:36:06.159592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.552 [2024-07-15 15:36:06.159598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.552 [2024-07-15 15:36:06.159611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.552 qpair failed and we were unable to recover it. 
00:30:56.813 [2024-07-15 15:36:06.169568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.813 [2024-07-15 15:36:06.169615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.813 [2024-07-15 15:36:06.169630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.813 [2024-07-15 15:36:06.169636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.813 [2024-07-15 15:36:06.169642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.813 [2024-07-15 15:36:06.169656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.813 qpair failed and we were unable to recover it. 00:30:56.813 [2024-07-15 15:36:06.179583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.813 [2024-07-15 15:36:06.179631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.813 [2024-07-15 15:36:06.179645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.813 [2024-07-15 15:36:06.179652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.813 [2024-07-15 15:36:06.179658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.813 [2024-07-15 15:36:06.179672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.813 qpair failed and we were unable to recover it. 00:30:56.813 [2024-07-15 15:36:06.189619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.813 [2024-07-15 15:36:06.189666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.813 [2024-07-15 15:36:06.189680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.813 [2024-07-15 15:36:06.189687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.813 [2024-07-15 15:36:06.189693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.813 [2024-07-15 15:36:06.189711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.813 qpair failed and we were unable to recover it. 
00:30:56.813 [2024-07-15 15:36:06.199654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.813 [2024-07-15 15:36:06.199697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.813 [2024-07-15 15:36:06.199711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.813 [2024-07-15 15:36:06.199718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.813 [2024-07-15 15:36:06.199724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.813 [2024-07-15 15:36:06.199737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.813 qpair failed and we were unable to recover it. 00:30:56.813 [2024-07-15 15:36:06.209703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.813 [2024-07-15 15:36:06.209749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.813 [2024-07-15 15:36:06.209763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.813 [2024-07-15 15:36:06.209770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.813 [2024-07-15 15:36:06.209776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.209789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.219692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.219748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.219762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.219769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.219775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.219788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 
00:30:56.814 [2024-07-15 15:36:06.229731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.229782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.229796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.229803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.229809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.229822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.239749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.239797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.239815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.239822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.239828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.239841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.249766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.249815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.249829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.249836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.249842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.249855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 
00:30:56.814 [2024-07-15 15:36:06.259795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.259847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.259861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.259867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.259873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.259890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.269893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.270020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.270034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.270041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.270047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.270061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.279841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.279887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.279901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.279908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.279917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.279931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 
00:30:56.814 [2024-07-15 15:36:06.289768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.289817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.289831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.289838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.289844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.289858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.299920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.299966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.299980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.299987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.299993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.300006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.309936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.309987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.310001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.310008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.310014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.310028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 
00:30:56.814 [2024-07-15 15:36:06.319969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.320017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.320031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.320038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.320044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.320058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.330004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.330053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.330068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.330074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.330080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.330094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 00:30:56.814 [2024-07-15 15:36:06.340028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.340082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.814 [2024-07-15 15:36:06.340098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.814 [2024-07-15 15:36:06.340105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.814 [2024-07-15 15:36:06.340111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.814 [2024-07-15 15:36:06.340129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.814 qpair failed and we were unable to recover it. 
00:30:56.814 [2024-07-15 15:36:06.349925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.814 [2024-07-15 15:36:06.349971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.349985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.349992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.349998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.350012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.360072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.360126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.360140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.360147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.360153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.360167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.370101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.370147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.370161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.370171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.370177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.370191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 
00:30:56.815 [2024-07-15 15:36:06.380115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.380171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.380186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.380193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.380200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.380214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.390153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.390216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.390231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.390237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.390243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.390257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.400179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.400223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.400237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.400244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.400250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.400263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 
00:30:56.815 [2024-07-15 15:36:06.410228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.410277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.410290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.410297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.410303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.410316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.420274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.420362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.420376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.420383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.420389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.420402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 00:30:56.815 [2024-07-15 15:36:06.430248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:56.815 [2024-07-15 15:36:06.430289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:56.815 [2024-07-15 15:36:06.430302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:56.815 [2024-07-15 15:36:06.430309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:56.815 [2024-07-15 15:36:06.430315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:56.815 [2024-07-15 15:36:06.430329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:56.815 qpair failed and we were unable to recover it. 
00:30:57.077 [2024-07-15 15:36:06.440313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.077 [2024-07-15 15:36:06.440405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.077 [2024-07-15 15:36:06.440419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.077 [2024-07-15 15:36:06.440426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.077 [2024-07-15 15:36:06.440432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.077 [2024-07-15 15:36:06.440445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.077 qpair failed and we were unable to recover it. 00:30:57.077 [2024-07-15 15:36:06.450335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.077 [2024-07-15 15:36:06.450381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.450396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.450403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.450409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.450422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.460359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.460413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.460427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.460437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.460443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.460457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 
00:30:57.078 [2024-07-15 15:36:06.470345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.470394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.470408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.470414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.470420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.470434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.480400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.480460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.480474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.480481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.480487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.480500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.490463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.490556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.490570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.490577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.490583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.490596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 
00:30:57.078 [2024-07-15 15:36:06.500446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.500503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.500517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.500523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.500529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.500543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.510485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.510531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.510545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.510552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.510558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.510572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.520495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.520540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.520554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.520561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.520566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.520580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 
00:30:57.078 [2024-07-15 15:36:06.530523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.530570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.530585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.530591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.530597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.530611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.540568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.540627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.540651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.540659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.540666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.540684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.550470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.550520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.550548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.550557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.550564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.550583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 
00:30:57.078 [2024-07-15 15:36:06.560583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.560629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.560645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.560652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.560658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.560673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.570649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.570701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.570726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.570734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.570741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.570759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 00:30:57.078 [2024-07-15 15:36:06.580568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.580616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.580632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.580639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.078 [2024-07-15 15:36:06.580645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.078 [2024-07-15 15:36:06.580660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.078 qpair failed and we were unable to recover it. 
00:30:57.078 [2024-07-15 15:36:06.590700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.078 [2024-07-15 15:36:06.590750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.078 [2024-07-15 15:36:06.590765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.078 [2024-07-15 15:36:06.590772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.590778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.590796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.600715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.600758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.600773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.600780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.600786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.600800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.610752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.610799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.610813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.610820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.610826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.610840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 
00:30:57.079 [2024-07-15 15:36:06.620768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.620823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.620837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.620844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.620850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.620864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.630664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.630710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.630724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.630731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.630737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.630751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.640699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.640745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.640762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.640769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.640775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.640789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 
00:30:57.079 [2024-07-15 15:36:06.650857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.650908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.650923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.650930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.650935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.650950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.660835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.660889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.660903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.660910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.660916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.660930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.670894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.670942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.670956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.670962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.670968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.670982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 
00:30:57.079 [2024-07-15 15:36:06.680831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.680877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.680893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.680900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.680910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.680924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.079 [2024-07-15 15:36:06.690949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.079 [2024-07-15 15:36:06.691001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.079 [2024-07-15 15:36:06.691015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.079 [2024-07-15 15:36:06.691022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.079 [2024-07-15 15:36:06.691028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.079 [2024-07-15 15:36:06.691041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.079 qpair failed and we were unable to recover it. 00:30:57.341 [2024-07-15 15:36:06.700987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.341 [2024-07-15 15:36:06.701041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.341 [2024-07-15 15:36:06.701055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.341 [2024-07-15 15:36:06.701062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.341 [2024-07-15 15:36:06.701068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.341 [2024-07-15 15:36:06.701082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.341 qpair failed and we were unable to recover it. 
00:30:57.341 [2024-07-15 15:36:06.711012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.341 [2024-07-15 15:36:06.711058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.341 [2024-07-15 15:36:06.711073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.341 [2024-07-15 15:36:06.711079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.341 [2024-07-15 15:36:06.711085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.341 [2024-07-15 15:36:06.711099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.341 qpair failed and we were unable to recover it. 00:30:57.341 [2024-07-15 15:36:06.721033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.341 [2024-07-15 15:36:06.721081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.341 [2024-07-15 15:36:06.721095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.341 [2024-07-15 15:36:06.721102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.341 [2024-07-15 15:36:06.721108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.341 [2024-07-15 15:36:06.721122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.341 qpair failed and we were unable to recover it. 00:30:57.341 [2024-07-15 15:36:06.731073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.341 [2024-07-15 15:36:06.731125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.341 [2024-07-15 15:36:06.731139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.341 [2024-07-15 15:36:06.731145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.341 [2024-07-15 15:36:06.731151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.341 [2024-07-15 15:36:06.731165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.341 qpair failed and we were unable to recover it. 
00:30:57.341 [2024-07-15 15:36:06.741088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.341 [2024-07-15 15:36:06.741144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.341 [2024-07-15 15:36:06.741158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.341 [2024-07-15 15:36:06.741165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.341 [2024-07-15 15:36:06.741171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.341 [2024-07-15 15:36:06.741184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.341 qpair failed and we were unable to recover it. 00:30:57.341 [2024-07-15 15:36:06.750998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.751045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.751060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.751066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.751072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.751086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.761174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.761220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.761234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.761241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.761247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.761260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 
00:30:57.342 [2024-07-15 15:36:06.771158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.771242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.771256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.771263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.771272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.771286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.781212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.781266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.781280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.781287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.781293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.781306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.791226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.791270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.791284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.791291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.791297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.791311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 
00:30:57.342 [2024-07-15 15:36:06.801246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.801293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.801309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.801316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.801323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.801340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.811162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.811208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.811223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.811230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.811236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.811250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.821312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.821367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.821382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.821389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.821396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.821410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 
00:30:57.342 [2024-07-15 15:36:06.831342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.831393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.831408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.831415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.831421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.831436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.841353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.841406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.841421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.841427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.841433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.841447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.851388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.851437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.851451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.851458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.851464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.851477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 
00:30:57.342 [2024-07-15 15:36:06.861402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.861450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.861464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.861474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.861480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.861494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.871330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.871376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.871390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.871397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.871403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.871416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 00:30:57.342 [2024-07-15 15:36:06.881452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.881501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.881516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.342 [2024-07-15 15:36:06.881523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.342 [2024-07-15 15:36:06.881529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.342 [2024-07-15 15:36:06.881542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.342 qpair failed and we were unable to recover it. 
00:30:57.342 [2024-07-15 15:36:06.891504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.342 [2024-07-15 15:36:06.891553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.342 [2024-07-15 15:36:06.891568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.891574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.891580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.343 [2024-07-15 15:36:06.891594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.343 qpair failed and we were unable to recover it. 00:30:57.343 [2024-07-15 15:36:06.901530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.901583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.901598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.901604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.901610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.343 [2024-07-15 15:36:06.901624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.343 qpair failed and we were unable to recover it. 00:30:57.343 [2024-07-15 15:36:06.911437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.911484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.911498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.911505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.911511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.343 [2024-07-15 15:36:06.911524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.343 qpair failed and we were unable to recover it. 
00:30:57.343 [2024-07-15 15:36:06.921561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.921612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.921627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.921634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.921640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2728000b90 00:30:57.343 [2024-07-15 15:36:06.921655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:57.343 qpair failed and we were unable to recover it. 00:30:57.343 [2024-07-15 15:36:06.931620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.931677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.931702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.931711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.931719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x227acf0 00:30:57.343 [2024-07-15 15:36:06.931738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.343 qpair failed and we were unable to recover it. 00:30:57.343 [2024-07-15 15:36:06.941637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.941692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.941717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.941725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.941732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x227acf0 00:30:57.343 [2024-07-15 15:36:06.941751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.343 qpair failed and we were unable to recover it. 
00:30:57.343 [2024-07-15 15:36:06.951676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.343 [2024-07-15 15:36:06.951790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.343 [2024-07-15 15:36:06.951863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.343 [2024-07-15 15:36:06.951901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.343 [2024-07-15 15:36:06.951922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2718000b90 00:30:57.343 [2024-07-15 15:36:06.951975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:57.343 qpair failed and we were unable to recover it. 00:30:57.603 [2024-07-15 15:36:06.961687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.603 [2024-07-15 15:36:06.961781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.603 [2024-07-15 15:36:06.961822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.603 [2024-07-15 15:36:06.961843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.603 [2024-07-15 15:36:06.961861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2718000b90 00:30:57.603 [2024-07-15 15:36:06.961913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:57.603 qpair failed and we were unable to recover it. 00:30:57.603 [2024-07-15 15:36:06.971732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.603 [2024-07-15 15:36:06.971776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.603 [2024-07-15 15:36:06.971795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.603 [2024-07-15 15:36:06.971801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.603 [2024-07-15 15:36:06.971805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2720000b90 00:30:57.603 [2024-07-15 15:36:06.971818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:57.603 qpair failed and we were unable to recover it. 
00:30:57.603 [2024-07-15 15:36:06.981745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:57.603 [2024-07-15 15:36:06.981795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:57.603 [2024-07-15 15:36:06.981807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:57.603 [2024-07-15 15:36:06.981812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:57.603 [2024-07-15 15:36:06.981817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2720000b90
00:30:57.603 [2024-07-15 15:36:06.981828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:57.603 qpair failed and we were unable to recover it.
00:30:57.603 [2024-07-15 15:36:06.981969] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:30:57.603 A controller has encountered a failure and is being reset.
00:30:57.603 [2024-07-15 15:36:06.982071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2270a60 (9): Bad file descriptor
00:30:57.603 Controller properly reset.
00:30:57.603 Initializing NVMe Controllers
00:30:57.603 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:57.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:57.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:57.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:57.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:57.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:57.603 Initialization complete. Launching workers.
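The failure pattern above repeats throughout this test: the target rejects each new I/O qpair CONNECT because it no longer recognizes controller ID 0x1 (the target-side "Unknown controller ID 0x1" message), the host sees the CONNECT complete with sct 1 / sc 130, and the failed keep-alive is what finally triggers the controller reset and re-attach whose worker threads restart just below. For reference only, the same listener can be exercised by hand from the initiator host with nvme-cli (available in this job per SPDK_TEST_NVME_CLI=1); this is a hedged sketch using the address and subsystem NQN printed above, not a command sequence from the recorded run:

# Hypothetical manual check of the target listener used by this test (not part of the captured run).
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list        # a new /dev/nvmeXnY should appear while the target is healthy
nvme disconnect -n nqn.2016-06.io.spdk:cnode1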
00:30:57.603 Starting thread on core 1 00:30:57.603 Starting thread on core 2 00:30:57.603 Starting thread on core 3 00:30:57.603 Starting thread on core 0 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:57.603 00:30:57.603 real 0m11.433s 00:30:57.603 user 0m21.590s 00:30:57.603 sys 0m3.567s 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:57.603 ************************************ 00:30:57.603 END TEST nvmf_target_disconnect_tc2 00:30:57.603 ************************************ 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.603 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.603 rmmod nvme_tcp 00:30:57.603 rmmod nvme_fabrics 00:30:57.862 rmmod nvme_keyring 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 902238 ']' 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 902238 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 902238 ']' 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 902238 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:57.862 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 902238 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 902238' 00:30:57.863 killing process with pid 902238 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 902238 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 902238 00:30:57.863 15:36:07 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.863 15:36:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.407 15:36:09 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:00.407 00:31:00.407 real 0m21.736s 00:31:00.407 user 0m49.777s 00:31:00.407 sys 0m9.497s 00:31:00.407 15:36:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.407 15:36:09 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:00.407 ************************************ 00:31:00.407 END TEST nvmf_target_disconnect 00:31:00.407 ************************************ 00:31:00.407 15:36:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:00.407 15:36:09 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:00.407 15:36:09 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:00.407 15:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.407 15:36:09 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:00.408 00:31:00.408 real 23m3.089s 00:31:00.408 user 47m27.661s 00:31:00.408 sys 7m13.318s 00:31:00.408 15:36:09 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.408 15:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.408 ************************************ 00:31:00.408 END TEST nvmf_tcp 00:31:00.408 ************************************ 00:31:00.408 15:36:09 -- common/autotest_common.sh@1142 -- # return 0 00:31:00.408 15:36:09 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:31:00.408 15:36:09 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:00.408 15:36:09 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:00.408 15:36:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.408 15:36:09 -- common/autotest_common.sh@10 -- # set +x 00:31:00.408 ************************************ 00:31:00.408 START TEST spdkcli_nvmf_tcp 00:31:00.408 ************************************ 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:00.408 * Looking for test storage... 
00:31:00.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=904243 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 904243 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 904243 ']' 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
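waitforlisten above does nothing more than block until the freshly launched nvmf_tgt answers on its JSON-RPC socket; once it does, the spdkcli_job.py invocation that follows drives the whole NVMe-oF configuration over that same socket. As a hedged illustration of that flow (example values, run from an SPDK source tree; not the exact commands this job executes), the readiness wait and the first few spdkcli steps listed below could be reproduced with the stock rpc.py client:

# Poll the default RPC socket until the target responds (roughly what waitforlisten does).
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done

# Rough rpc.py equivalents of some spdkcli commands shown below (illustrative only):
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3                                   # /bdevs/malloc create 32 512 Malloc3
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192                                   # nvmf/transport create tcp io_unit_size=8192
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260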
00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:00.408 15:36:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:00.408 [2024-07-15 15:36:09.861503] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:31:00.408 [2024-07-15 15:36:09.861582] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904243 ] 00:31:00.408 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.408 [2024-07-15 15:36:09.933135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:00.408 [2024-07-15 15:36:10.009163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.408 [2024-07-15 15:36:10.009168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:01.352 15:36:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:01.352 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:01.352 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:01.353 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:01.353 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:01.353 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:01.353 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:01.353 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:01.353 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:01.353 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:01.353 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:01.353 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:01.353 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:01.353 ' 00:31:03.902 [2024-07-15 15:36:12.979816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:04.844 [2024-07-15 15:36:14.143644] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:06.754 [2024-07-15 15:36:16.277891] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:08.665 [2024-07-15 15:36:18.115446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:10.048 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:10.048 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:10.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:10.048 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:10.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:10.048 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:10.048 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:10.048 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:10.048 15:36:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:10.048 15:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.048 15:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.308 15:36:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:10.308 15:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.308 15:36:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.308 15:36:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:10.308 15:36:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:10.569 15:36:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:10.569 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:10.569 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:10.569 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:10.569 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:10.569 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:10.569 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:10.569 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:10.569 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:10.569 ' 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:15.854 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:15.854 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:15.854 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:15.854 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:15.854 15:36:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:15.854 15:36:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:15.854 15:36:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 904243 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 904243 ']' 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 904243 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 904243 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:15.854 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 904243' 00:31:15.855 killing process with pid 904243 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 904243 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 904243 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 904243 ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 904243 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 904243 ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 904243 00:31:15.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (904243) - No such process 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 904243 is not found' 00:31:15.855 Process with pid 904243 is not found 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:15.855 00:31:15.855 real 0m15.521s 00:31:15.855 user 0m31.982s 00:31:15.855 sys 0m0.705s 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:15.855 15:36:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:15.855 ************************************ 00:31:15.855 END TEST spdkcli_nvmf_tcp 00:31:15.855 ************************************ 00:31:15.855 15:36:25 -- common/autotest_common.sh@1142 -- # return 0 00:31:15.855 15:36:25 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:15.855 15:36:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:15.855 15:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:15.855 15:36:25 -- common/autotest_common.sh@10 -- # set +x 00:31:15.855 ************************************ 00:31:15.855 START TEST nvmf_identify_passthru 00:31:15.855 ************************************ 00:31:15.855 15:36:25 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:15.855 * Looking for test storage... 00:31:15.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.855 15:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:15.855 15:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:15.855 15:36:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.855 15:36:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.855 15:36:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:15.855 15:36:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:15.855 15:36:25 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:31:15.855 15:36:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.014 15:36:32 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:24.014 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:24.014 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:24.014 Found net devices under 0000:31:00.0: cvl_0_0 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:24.014 Found net devices under 0000:31:00.1: cvl_0_1 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
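The device discovery above (gather_supported_nvmf_pci_devs) reduces to a small sysfs lookup: for each PCI function whose device ID matched the E810/X722/Mellanox tables, the script globs the net devices exposed under that PCI node, which is how "Found net devices under 0000:31:00.0: cvl_0_0" is produced. A minimal stand-alone sketch of that lookup, assuming an Intel E810 NIC (device ID 0x159b) like the one in this run; the lspci filtering is illustrative and not part of the test script itself:

#!/usr/bin/env bash
# Sketch only: map NVMf-capable NICs (Intel E810, 8086:159b) to their kernel
# net device names via sysfs, mirroring the pci_net_devs glob used above.
set -e
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    # Every network PCI function lists its netdevs under .../net/
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done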
00:31:24.014 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.015 15:36:32 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:31:24.015 00:31:24.015 --- 10.0.0.2 ping statistics --- 00:31:24.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.015 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:31:24.015 00:31:24.015 --- 10.0.0.1 ping statistics --- 00:31:24.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.015 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.015 15:36:33 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:31:24.015 15:36:33 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:24.015 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:24.015 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.275 
15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605500 00:31:24.275 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:24.275 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:24.275 15:36:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:24.275 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=911853 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:24.844 15:36:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 911853 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 911853 ']' 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.844 15:36:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:24.844 [2024-07-15 15:36:34.308047] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:31:24.844 [2024-07-15 15:36:34.308100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.844 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.844 [2024-07-15 15:36:34.379698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:24.844 [2024-07-15 15:36:34.449781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.844 [2024-07-15 15:36:34.449819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:24.844 [2024-07-15 15:36:34.449827] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.844 [2024-07-15 15:36:34.449834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.844 [2024-07-15 15:36:34.449839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.844 [2024-07-15 15:36:34.449898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.844 [2024-07-15 15:36:34.450045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:24.844 [2024-07-15 15:36:34.450274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:24.844 [2024-07-15 15:36:34.450275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:25.826 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:25.826 INFO: Log level set to 20 00:31:25.826 INFO: Requests: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "method": "nvmf_set_config", 00:31:25.826 "id": 1, 00:31:25.826 "params": { 00:31:25.826 "admin_cmd_passthru": { 00:31:25.826 "identify_ctrlr": true 00:31:25.826 } 00:31:25.826 } 00:31:25.826 } 00:31:25.826 00:31:25.826 INFO: response: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "id": 1, 00:31:25.826 "result": true 00:31:25.826 } 00:31:25.826 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.826 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:25.826 INFO: Setting log level to 20 00:31:25.826 INFO: Setting log level to 20 00:31:25.826 INFO: Log level set to 20 00:31:25.826 INFO: Log level set to 20 00:31:25.826 INFO: Requests: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "method": "framework_start_init", 00:31:25.826 "id": 1 00:31:25.826 } 00:31:25.826 00:31:25.826 INFO: Requests: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "method": "framework_start_init", 00:31:25.826 "id": 1 00:31:25.826 } 00:31:25.826 00:31:25.826 [2024-07-15 15:36:35.172304] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:25.826 INFO: response: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "id": 1, 00:31:25.826 "result": true 00:31:25.826 } 00:31:25.826 00:31:25.826 INFO: response: 00:31:25.826 { 00:31:25.826 "jsonrpc": "2.0", 00:31:25.826 "id": 1, 00:31:25.826 "result": true 00:31:25.826 } 00:31:25.826 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.826 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.826 15:36:35 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:25.826 INFO: Setting log level to 40 00:31:25.826 INFO: Setting log level to 40 00:31:25.826 INFO: Setting log level to 40 00:31:25.826 [2024-07-15 15:36:35.185627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.826 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:25.826 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.826 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.097 Nvme0n1 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.097 [2024-07-15 15:36:35.573245] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.097 [ 00:31:26.097 { 00:31:26.097 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:26.097 "subtype": "Discovery", 00:31:26.097 "listen_addresses": [], 00:31:26.097 "allow_any_host": true, 00:31:26.097 "hosts": [] 00:31:26.097 }, 00:31:26.097 { 00:31:26.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.097 "subtype": "NVMe", 00:31:26.097 "listen_addresses": [ 00:31:26.097 { 00:31:26.097 "trtype": "TCP", 00:31:26.097 "adrfam": "IPv4", 00:31:26.097 "traddr": "10.0.0.2", 00:31:26.097 "trsvcid": "4420" 00:31:26.097 } 00:31:26.097 ], 00:31:26.097 "allow_any_host": true, 00:31:26.097 "hosts": [], 00:31:26.097 "serial_number": 
"SPDK00000000000001", 00:31:26.097 "model_number": "SPDK bdev Controller", 00:31:26.097 "max_namespaces": 1, 00:31:26.097 "min_cntlid": 1, 00:31:26.097 "max_cntlid": 65519, 00:31:26.097 "namespaces": [ 00:31:26.097 { 00:31:26.097 "nsid": 1, 00:31:26.097 "bdev_name": "Nvme0n1", 00:31:26.097 "name": "Nvme0n1", 00:31:26.097 "nguid": "36344730526055000025384500000031", 00:31:26.097 "uuid": "36344730-5260-5500-0025-384500000031" 00:31:26.097 } 00:31:26.097 ] 00:31:26.097 } 00:31:26.097 ] 00:31:26.097 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:26.097 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.097 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605500 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:26.358 EAL: No free 2048 kB hugepages reported on node 1 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605500 '!=' S64GNE0R605500 ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:26.358 15:36:35 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:26.358 rmmod nvme_tcp 00:31:26.358 rmmod nvme_fabrics 00:31:26.358 rmmod nvme_keyring 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:26.358 15:36:35 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 911853 ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 911853 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 911853 ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 911853 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:26.358 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 911853 00:31:26.618 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:26.618 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:26.618 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 911853' 00:31:26.618 killing process with pid 911853 00:31:26.618 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 911853 00:31:26.618 15:36:35 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 911853 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:26.879 15:36:36 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.879 15:36:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:26.879 15:36:36 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.800 15:36:38 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:28.800 00:31:28.800 real 0m13.026s 00:31:28.800 user 0m9.754s 00:31:28.800 sys 0m6.357s 00:31:28.800 15:36:38 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:28.800 15:36:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:28.800 ************************************ 00:31:28.800 END TEST nvmf_identify_passthru 00:31:28.800 ************************************ 00:31:28.800 15:36:38 -- common/autotest_common.sh@1142 -- # return 0 00:31:28.800 15:36:38 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:28.800 15:36:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:28.800 15:36:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.800 15:36:38 -- common/autotest_common.sh@10 -- # set +x 00:31:28.800 ************************************ 00:31:28.800 START TEST nvmf_dif 00:31:28.800 ************************************ 00:31:28.800 15:36:38 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:29.061 * Looking for test storage... 
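The identify_passthru test that just finished boils down to a short RPC conversation followed by a comparison of identify data. A condensed sketch of the same sequence as direct commands, using the PCIe address (0000:65:00.0), IPs, and NQN from this run; in the log the calls go through the rpc_cmd wrapper, which in the SPDK test harness forwards to scripts/rpc.py, so the exact invocation below is an approximation:

# Target was started with --wait-for-rpc, so admin-command passthrough can be
# enabled before the framework initializes.
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The check itself: identify the drive once over PCIe and once through the TCP
# subsystem; with passthru enabled both report the physical drive's serial and
# model number (S64GNE0R605500 / SAMSUNG in this run).
build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' | grep 'Serial Number:'
build/bin/spdk_nvme_identify \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  | grep 'Serial Number:'

The teardown seen just above is the inverse: unload nvme-tcp/nvme-fabrics, kill the nvmf_tgt pid, and remove the spdk network namespace before the next test (nvmf_dif, starting below) rebuilds the same topology.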
00:31:29.061 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.061 15:36:38 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.061 15:36:38 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.061 15:36:38 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.061 15:36:38 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.061 15:36:38 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.061 15:36:38 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.061 15:36:38 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:29.061 15:36:38 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:29.061 15:36:38 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.061 15:36:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:29.061 15:36:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:29.061 15:36:38 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:29.061 15:36:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:37.199 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:37.199 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
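The nvmf_tcp_init step that follows (and that already ran once for the identify_passthru test) splits the two E810 ports into a target side and an initiator side so NVMe/TCP traffic actually crosses the NIC: one port is moved into a private network namespace and addressed as the target, the other stays in the default namespace as the initiator. A stand-alone sketch using the interface names and addresses from this run (run as root; this only restates the commands visible in the log):

# Private namespace for the target-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Initiator side stays in the default namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic (port 4420) into the initiator-side interface, then
# verify reachability in both directions before any NVMf I/O is attempted.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1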
00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:37.199 Found net devices under 0000:31:00.0: cvl_0_0 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:37.199 Found net devices under 0000:31:00.1: cvl_0_1 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:37.199 15:36:45 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:37.200 15:36:45 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:37.200 15:36:46 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:37.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:37.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:31:37.200 00:31:37.200 --- 10.0.0.2 ping statistics --- 00:31:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.200 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:37.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:37.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:31:37.200 00:31:37.200 --- 10.0.0.1 ping statistics --- 00:31:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:37.200 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:37.200 15:36:46 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:39.746 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:39.746 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:39.746 15:36:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:39.746 15:36:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=918041 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 918041 00:31:39.746 15:36:49 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 918041 ']' 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:39.746 15:36:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:39.746 [2024-07-15 15:36:49.205109] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:31:39.746 [2024-07-15 15:36:49.205156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.746 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.746 [2024-07-15 15:36:49.273330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.746 [2024-07-15 15:36:49.337011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.746 [2024-07-15 15:36:49.337045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.746 [2024-07-15 15:36:49.337052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.746 [2024-07-15 15:36:49.337058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.746 [2024-07-15 15:36:49.337063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
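At this point the nvmf_dif run has launched nvmf_tgt inside the target namespace; the lines that follow create a TCP transport with DIF insert/strip enabled, a metadata-capable null bdev, and a subsystem that exports it. The same steps as direct commands (a sketch: rpc_cmd in the log is, in the SPDK test harness, a thin wrapper around scripts/rpc.py, and the sizes and NQN below are the test's own defaults):

# Start the NVMf target inside the namespace that owns the target-side port.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# Once /var/tmp/spdk.sock is listening (waitforlisten in the log), configure it.
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Null bdev: size 64, block size 512, 16-byte metadata, DIF type 1
# (NULL_SIZE/NULL_BLOCK_SIZE/NULL_META/NULL_DIF from dif.sh).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420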
00:31:39.746 [2024-07-15 15:36:49.337082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.689 15:36:49 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.689 15:36:49 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:40.689 15:36:49 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.689 15:36:49 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:40.689 15:36:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 15:36:50 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.689 15:36:50 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:40.689 15:36:50 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 [2024-07-15 15:36:50.047739] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.689 15:36:50 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 ************************************ 00:31:40.689 START TEST fio_dif_1_default 00:31:40.689 ************************************ 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 bdev_null0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:40.689 [2024-07-15 15:36:50.136079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.689 { 00:31:40.689 "params": { 00:31:40.689 "name": "Nvme$subsystem", 00:31:40.689 "trtype": "$TEST_TRANSPORT", 00:31:40.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.689 "adrfam": "ipv4", 00:31:40.689 "trsvcid": "$NVMF_PORT", 00:31:40.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.689 "hdgst": ${hdgst:-false}, 00:31:40.689 "ddgst": ${ddgst:-false} 00:31:40.689 }, 00:31:40.689 "method": "bdev_nvme_attach_controller" 00:31:40.689 } 00:31:40.689 EOF 00:31:40.689 )") 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:40.689 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:40.690 "params": { 00:31:40.690 "name": "Nvme0", 00:31:40.690 "trtype": "tcp", 00:31:40.690 "traddr": "10.0.0.2", 00:31:40.690 "adrfam": "ipv4", 00:31:40.690 "trsvcid": "4420", 00:31:40.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.690 "hdgst": false, 00:31:40.690 "ddgst": false 00:31:40.690 }, 00:31:40.690 "method": "bdev_nvme_attach_controller" 00:31:40.690 }' 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.690 15:36:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.259 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:41.259 fio-3.35 00:31:41.259 Starting 1 thread 00:31:41.259 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.492 00:31:53.492 filename0: (groupid=0, jobs=1): err= 0: pid=918573: Mon Jul 15 15:37:01 2024 00:31:53.492 read: IOPS=188, BW=755KiB/s (773kB/s)(7568KiB/10019msec) 00:31:53.492 slat (nsec): min=4188, max=23812, avg=8030.80, stdev=655.86 00:31:53.492 clat (usec): min=579, max=42072, avg=21158.20, stdev=20258.14 00:31:53.492 lat (usec): min=587, max=42080, avg=21166.23, stdev=20258.11 00:31:53.492 clat percentiles (usec): 00:31:53.492 | 1.00th=[ 725], 5.00th=[ 898], 10.00th=[ 906], 20.00th=[ 922], 00:31:53.492 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 7570], 60.00th=[41157], 00:31:53.492 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:53.492 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:53.492 | 99.99th=[42206] 00:31:53.492 bw ( KiB/s): min= 704, max= 768, per=99.95%, avg=755.20, stdev=26.27, samples=20 00:31:53.492 iops : min= 176, max= 192, 
avg=188.80, stdev= 6.57, samples=20 00:31:53.492 lat (usec) : 750=1.48%, 1000=48.20% 00:31:53.492 lat (msec) : 2=0.21%, 10=0.21%, 50=49.89% 00:31:53.492 cpu : usr=94.79%, sys=4.96%, ctx=15, majf=0, minf=240 00:31:53.492 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:53.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.492 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.492 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:53.492 00:31:53.492 Run status group 0 (all jobs): 00:31:53.492 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7568KiB (7750kB), run=10019-10019msec 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.492 00:31:53.492 real 0m11.108s 00:31:53.492 user 0m26.120s 00:31:53.492 sys 0m0.796s 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:53.492 15:37:01 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:53.492 ************************************ 00:31:53.492 END TEST fio_dif_1_default 00:31:53.492 ************************************ 00:31:53.492 15:37:01 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:53.492 15:37:01 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:53.492 15:37:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:53.493 15:37:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 ************************************ 00:31:53.493 START TEST fio_dif_1_multi_subsystems 00:31:53.493 ************************************ 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:53.493 15:37:01 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 bdev_null0 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 [2024-07-15 15:37:01.318073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 bdev_null1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:53.493 { 00:31:53.493 "params": { 00:31:53.493 "name": "Nvme$subsystem", 00:31:53.493 "trtype": "$TEST_TRANSPORT", 00:31:53.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.493 "adrfam": "ipv4", 00:31:53.493 "trsvcid": "$NVMF_PORT", 00:31:53.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.493 "hdgst": ${hdgst:-false}, 00:31:53.493 "ddgst": ${ddgst:-false} 00:31:53.493 }, 00:31:53.493 "method": "bdev_nvme_attach_controller" 00:31:53.493 } 00:31:53.493 EOF 00:31:53.493 )") 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # 
local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:53.493 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:53.494 { 00:31:53.494 "params": { 00:31:53.494 "name": "Nvme$subsystem", 00:31:53.494 "trtype": "$TEST_TRANSPORT", 00:31:53.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.494 "adrfam": "ipv4", 00:31:53.494 "trsvcid": "$NVMF_PORT", 00:31:53.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.494 "hdgst": ${hdgst:-false}, 00:31:53.494 "ddgst": ${ddgst:-false} 00:31:53.494 }, 00:31:53.494 "method": "bdev_nvme_attach_controller" 00:31:53.494 } 00:31:53.494 EOF 00:31:53.494 )") 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
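The rpc_cmd calls traced above create two DIF type-1 null bdevs and export each one through its own NVMe/TCP subsystem before fio is launched. Assuming rpc_cmd forwards its arguments to scripts/rpc.py (as the test helpers normally do), the equivalent stand-alone sequence for the first subsystem is roughly:

    # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, DIF type 1,
    # exported over NVMe/TCP on 10.0.0.2:4420 (values copied from the trace).
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # The multi-subsystem case repeats the same four calls with bdev_null1 /
    # cnode1 so that fio can drive two namespaces in parallel.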
00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:53.494 "params": { 00:31:53.494 "name": "Nvme0", 00:31:53.494 "trtype": "tcp", 00:31:53.494 "traddr": "10.0.0.2", 00:31:53.494 "adrfam": "ipv4", 00:31:53.494 "trsvcid": "4420", 00:31:53.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:53.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:53.494 "hdgst": false, 00:31:53.494 "ddgst": false 00:31:53.494 }, 00:31:53.494 "method": "bdev_nvme_attach_controller" 00:31:53.494 },{ 00:31:53.494 "params": { 00:31:53.494 "name": "Nvme1", 00:31:53.494 "trtype": "tcp", 00:31:53.494 "traddr": "10.0.0.2", 00:31:53.494 "adrfam": "ipv4", 00:31:53.494 "trsvcid": "4420", 00:31:53.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.494 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.494 "hdgst": false, 00:31:53.494 "ddgst": false 00:31:53.494 }, 00:31:53.494 "method": "bdev_nvme_attach_controller" 00:31:53.494 }' 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:53.494 15:37:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.494 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:53.494 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:53.494 fio-3.35 00:31:53.494 Starting 2 threads 00:31:53.494 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.572 00:32:03.572 filename0: (groupid=0, jobs=1): err= 0: pid=921018: Mon Jul 15 15:37:12 2024 00:32:03.572 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:32:03.572 slat (nsec): min=5407, max=32812, avg=6295.26, stdev=1455.38 00:32:03.572 clat (usec): min=40897, max=43066, avg=41978.26, stdev=120.60 00:32:03.572 lat (usec): min=40905, max=43099, avg=41984.56, stdev=120.87 00:32:03.572 clat percentiles (usec): 00:32:03.572 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:32:03.572 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:03.572 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:03.572 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:32:03.572 | 99.99th=[43254] 
00:32:03.572 bw ( KiB/s): min= 352, max= 384, per=49.87%, avg=380.80, stdev= 9.85, samples=20 00:32:03.572 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:03.572 lat (msec) : 50=100.00% 00:32:03.572 cpu : usr=96.89%, sys=2.84%, ctx=33, majf=0, minf=59 00:32:03.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.572 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:03.572 filename1: (groupid=0, jobs=1): err= 0: pid=921019: Mon Jul 15 15:37:12 2024 00:32:03.572 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:32:03.572 slat (nsec): min=5401, max=33158, avg=6361.65, stdev=1632.23 00:32:03.572 clat (usec): min=40930, max=42075, avg=41977.87, stdev=74.85 00:32:03.572 lat (usec): min=40938, max=42080, avg=41984.23, stdev=74.65 00:32:03.572 clat percentiles (usec): 00:32:03.572 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:32:03.572 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:03.572 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:03.572 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:03.572 | 99.99th=[42206] 00:32:03.572 bw ( KiB/s): min= 352, max= 384, per=49.87%, avg=380.80, stdev= 9.85, samples=20 00:32:03.572 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:32:03.572 lat (msec) : 50=100.00% 00:32:03.572 cpu : usr=96.71%, sys=3.09%, ctx=12, majf=0, minf=167 00:32:03.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:03.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.572 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:03.572 00:32:03.572 Run status group 0 (all jobs): 00:32:03.572 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10037-10037msec 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.572 00:32:03.572 real 0m11.404s 00:32:03.572 user 0m34.527s 00:32:03.572 sys 0m0.901s 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 ************************************ 00:32:03.572 END TEST fio_dif_1_multi_subsystems 00:32:03.572 ************************************ 00:32:03.572 15:37:12 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:03.572 15:37:12 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:03.572 15:37:12 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:03.572 15:37:12 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:03.572 ************************************ 00:32:03.572 START TEST fio_dif_rand_params 00:32:03.572 ************************************ 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:03.572 15:37:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.572 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 bdev_null0 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:03.573 [2024-07-15 15:37:12.798355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:03.573 { 00:32:03.573 "params": { 00:32:03.573 
"name": "Nvme$subsystem", 00:32:03.573 "trtype": "$TEST_TRANSPORT", 00:32:03.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.573 "adrfam": "ipv4", 00:32:03.573 "trsvcid": "$NVMF_PORT", 00:32:03.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.573 "hdgst": ${hdgst:-false}, 00:32:03.573 "ddgst": ${ddgst:-false} 00:32:03.573 }, 00:32:03.573 "method": "bdev_nvme_attach_controller" 00:32:03.573 } 00:32:03.573 EOF 00:32:03.573 )") 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:03.573 "params": { 00:32:03.573 "name": "Nvme0", 00:32:03.573 "trtype": "tcp", 00:32:03.573 "traddr": "10.0.0.2", 00:32:03.573 "adrfam": "ipv4", 00:32:03.573 "trsvcid": "4420", 00:32:03.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:03.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:03.573 "hdgst": false, 00:32:03.573 "ddgst": false 00:32:03.573 }, 00:32:03.573 "method": "bdev_nvme_attach_controller" 00:32:03.573 }' 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:03.573 15:37:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:03.832 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:03.832 ... 
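The job banner above shows how the random-parameters run is wired together: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (printed earlier in the trace), fio loads the external spdk_bdev engine through LD_PRELOAD, and the generated JSON plus the fio job file are fed in on /dev/fd/62 and /dev/fd/61. A trimmed-down manual invocation along the same lines, where bdev.json is assumed to hold that generated configuration and Nvme0n1 is the bdev name the attach call is expected to produce, would be:

    # Run fio against the SPDK bdev created from the NVMe/TCP attach above.
    # Job parameters (randread, 128k, 3 jobs, iodepth 3, 5 s) match the trace.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --thread=1 --name=filename0 --filename=Nvme0n1 \
        --rw=randread --bs=128k --numjobs=3 --iodepth=3 --runtime=5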
00:32:03.832 fio-3.35 00:32:03.832 Starting 3 threads 00:32:03.832 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.115 00:32:09.116 filename0: (groupid=0, jobs=1): err= 0: pid=923284: Mon Jul 15 15:37:18 2024 00:32:09.116 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(146MiB/5045msec) 00:32:09.116 slat (nsec): min=5455, max=34270, avg=8354.59, stdev=1698.80 00:32:09.116 clat (usec): min=6184, max=54010, avg=12899.06, stdev=8089.89 00:32:09.116 lat (usec): min=6192, max=54020, avg=12907.42, stdev=8089.88 00:32:09.116 clat percentiles (usec): 00:32:09.116 | 1.00th=[ 7046], 5.00th=[ 7832], 10.00th=[ 8356], 20.00th=[ 9241], 00:32:09.116 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:32:09.116 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15139], 95.00th=[16712], 00:32:09.116 | 99.00th=[51119], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:32:09.116 | 99.99th=[54264] 00:32:09.116 bw ( KiB/s): min=24320, max=36096, per=34.26%, avg=29875.20, stdev=4263.34, samples=10 00:32:09.116 iops : min= 190, max= 282, avg=233.40, stdev=33.31, samples=10 00:32:09.116 lat (msec) : 10=30.11%, 20=65.61%, 50=2.48%, 100=1.80% 00:32:09.116 cpu : usr=96.69%, sys=3.05%, ctx=11, majf=0, minf=48 00:32:09.116 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.116 filename0: (groupid=0, jobs=1): err= 0: pid=923285: Mon Jul 15 15:37:18 2024 00:32:09.116 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5005msec) 00:32:09.116 slat (nsec): min=5503, max=36928, avg=8197.08, stdev=1773.55 00:32:09.116 clat (usec): min=4925, max=90027, avg=13965.05, stdev=11145.60 00:32:09.116 lat (usec): min=4935, max=90035, avg=13973.24, stdev=11145.55 00:32:09.116 clat percentiles (usec): 00:32:09.116 | 1.00th=[ 5604], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 9372], 00:32:09.116 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10945], 60.00th=[11863], 00:32:09.116 | 70.00th=[12649], 80.00th=[13829], 90.00th=[15401], 95.00th=[49546], 00:32:09.116 | 99.00th=[52691], 99.50th=[54789], 99.90th=[89654], 99.95th=[89654], 00:32:09.116 | 99.99th=[89654] 00:32:09.116 bw ( KiB/s): min=20224, max=37120, per=31.47%, avg=27443.20, stdev=4706.19, samples=10 00:32:09.116 iops : min= 158, max= 290, avg=214.40, stdev=36.77, samples=10 00:32:09.116 lat (msec) : 10=31.19%, 20=61.64%, 50=2.70%, 100=4.47% 00:32:09.116 cpu : usr=96.24%, sys=3.48%, ctx=10, majf=0, minf=73 00:32:09.116 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 issued rwts: total=1074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.116 filename0: (groupid=0, jobs=1): err= 0: pid=923286: Mon Jul 15 15:37:18 2024 00:32:09.116 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(149MiB/5004msec) 00:32:09.116 slat (nsec): min=5500, max=33318, avg=8171.57, stdev=2121.25 00:32:09.116 clat (usec): min=3802, max=91603, avg=12558.76, stdev=7650.74 00:32:09.116 lat (usec): min=3809, max=91614, avg=12566.93, stdev=7650.88 00:32:09.116 clat percentiles (usec): 00:32:09.116 
| 1.00th=[ 6325], 5.00th=[ 7504], 10.00th=[ 8225], 20.00th=[ 9241], 00:32:09.116 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:32:09.116 | 70.00th=[12518], 80.00th=[13566], 90.00th=[15008], 95.00th=[16188], 00:32:09.116 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54264], 99.95th=[91751], 00:32:09.116 | 99.99th=[91751] 00:32:09.116 bw ( KiB/s): min=22528, max=35840, per=35.00%, avg=30520.50, stdev=4305.54, samples=10 00:32:09.116 iops : min= 176, max= 280, avg=238.40, stdev=33.68, samples=10 00:32:09.116 lat (msec) : 4=0.08%, 10=29.31%, 20=67.17%, 50=1.84%, 100=1.59% 00:32:09.116 cpu : usr=95.80%, sys=3.68%, ctx=260, majf=0, minf=174 00:32:09.116 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:09.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:09.116 issued rwts: total=1194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:09.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:09.116 00:32:09.116 Run status group 0 (all jobs): 00:32:09.116 READ: bw=85.2MiB/s (89.3MB/s), 26.8MiB/s-29.8MiB/s (28.1MB/s-31.3MB/s), io=430MiB (450MB), run=5004-5045msec 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.375 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:09.376 
15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 bdev_null0 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 [2024-07-15 15:37:18.906641] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 bdev_null1 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 bdev_null2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.376 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.636 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.636 15:37:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:09.636 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.636 15:37:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:09.636 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.636 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:09.636 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.637 { 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme$subsystem", 00:32:09.637 "trtype": "$TEST_TRANSPORT", 00:32:09.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "$NVMF_PORT", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.637 "hdgst": ${hdgst:-false}, 00:32:09.637 "ddgst": ${ddgst:-false} 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 } 00:32:09.637 EOF 00:32:09.637 )") 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.637 { 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme$subsystem", 00:32:09.637 "trtype": "$TEST_TRANSPORT", 00:32:09.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "$NVMF_PORT", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.637 "hdgst": ${hdgst:-false}, 00:32:09.637 "ddgst": ${ddgst:-false} 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 } 00:32:09.637 EOF 00:32:09.637 )") 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:09.637 { 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme$subsystem", 00:32:09.637 "trtype": "$TEST_TRANSPORT", 00:32:09.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "$NVMF_PORT", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.637 "hdgst": ${hdgst:-false}, 00:32:09.637 "ddgst": ${ddgst:-false} 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 } 00:32:09.637 EOF 00:32:09.637 )") 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme0", 00:32:09.637 "trtype": "tcp", 00:32:09.637 "traddr": "10.0.0.2", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "4420", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:09.637 "hdgst": false, 00:32:09.637 "ddgst": false 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 },{ 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme1", 00:32:09.637 "trtype": "tcp", 00:32:09.637 "traddr": "10.0.0.2", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "4420", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.637 "hdgst": false, 00:32:09.637 "ddgst": false 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 },{ 00:32:09.637 "params": { 00:32:09.637 "name": "Nvme2", 00:32:09.637 "trtype": "tcp", 00:32:09.637 "traddr": "10.0.0.2", 00:32:09.637 "adrfam": "ipv4", 00:32:09.637 "trsvcid": "4420", 00:32:09.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:09.637 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:09.637 "hdgst": false, 00:32:09.637 "ddgst": false 00:32:09.637 }, 00:32:09.637 "method": "bdev_nvme_attach_controller" 00:32:09.637 }' 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:09.637 15:37:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:09.897 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:09.897 ... 00:32:09.897 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:09.897 ... 00:32:09.897 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:09.897 ... 00:32:09.897 fio-3.35 00:32:09.897 Starting 24 threads 00:32:09.897 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.129 00:32:22.129 filename0: (groupid=0, jobs=1): err= 0: pid=924794: Mon Jul 15 15:37:30 2024 00:32:22.129 read: IOPS=510, BW=2042KiB/s (2091kB/s)(20.0MiB/10012msec) 00:32:22.129 slat (nsec): min=4116, max=73216, avg=15097.00, stdev=12486.53 00:32:22.129 clat (usec): min=1199, max=43214, avg=31221.33, stdev=4577.71 00:32:22.129 lat (usec): min=1207, max=43223, avg=31236.43, stdev=4578.69 00:32:22.129 clat percentiles (usec): 00:32:22.129 | 1.00th=[ 6063], 5.00th=[26608], 10.00th=[31589], 20.00th=[31851], 00:32:22.129 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.129 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:32:22.129 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[43254], 00:32:22.129 | 99.99th=[43254] 00:32:22.129 bw ( KiB/s): min= 1916, max= 2992, per=4.26%, avg=2036.84, stdev=240.05, samples=19 00:32:22.129 iops : min= 479, max= 748, avg=509.21, stdev=60.01, samples=19 00:32:22.129 lat (msec) : 2=0.63%, 4=0.18%, 10=1.47%, 20=1.29%, 50=96.44% 00:32:22.129 cpu : usr=99.07%, sys=0.56%, ctx=103, majf=0, minf=0 00:32:22.129 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:22.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 issued rwts: total=5110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.129 filename0: (groupid=0, jobs=1): err= 0: pid=924795: Mon Jul 15 15:37:30 2024 00:32:22.129 read: IOPS=503, BW=2016KiB/s (2064kB/s)(19.7MiB/10002msec) 00:32:22.129 slat (nsec): min=5683, max=73347, avg=12445.02, stdev=9868.36 00:32:22.129 clat (usec): min=4720, max=34639, avg=31647.42, stdev=3301.10 00:32:22.129 lat (usec): min=4729, max=34648, avg=31659.86, stdev=3300.12 00:32:22.129 clat percentiles (usec): 00:32:22.129 | 1.00th=[10159], 5.00th=[31327], 10.00th=[31851], 20.00th=[31851], 00:32:22.129 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:32:22.129 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:32:22.129 | 99.00th=[33424], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:32:22.129 | 99.99th=[34866] 00:32:22.129 bw ( KiB/s): min= 1920, max= 2432, per=4.20%, avg=2007.58, stdev=121.08, samples=19 00:32:22.129 iops : min= 480, max= 608, avg=501.89, stdev=30.27, samples=19 00:32:22.129 lat (msec) : 10=0.95%, 
20=1.27%, 50=97.78% 00:32:22.129 cpu : usr=99.01%, sys=0.64%, ctx=75, majf=0, minf=9 00:32:22.129 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:22.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.129 filename0: (groupid=0, jobs=1): err= 0: pid=924796: Mon Jul 15 15:37:30 2024 00:32:22.129 read: IOPS=504, BW=2017KiB/s (2065kB/s)(19.7MiB/10026msec) 00:32:22.129 slat (nsec): min=5644, max=73287, avg=14027.09, stdev=9784.25 00:32:22.129 clat (usec): min=11808, max=55943, avg=31532.60, stdev=3286.53 00:32:22.129 lat (usec): min=11817, max=55949, avg=31546.63, stdev=3287.37 00:32:22.129 clat percentiles (usec): 00:32:22.129 | 1.00th=[16188], 5.00th=[25822], 10.00th=[31589], 20.00th=[31851], 00:32:22.129 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.129 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:32:22.129 | 99.00th=[33424], 99.50th=[35390], 99.90th=[55837], 99.95th=[55837], 00:32:22.129 | 99.99th=[55837] 00:32:22.129 bw ( KiB/s): min= 1920, max= 2224, per=4.23%, avg=2021.60, stdev=104.63, samples=20 00:32:22.129 iops : min= 480, max= 556, avg=505.40, stdev=26.16, samples=20 00:32:22.129 lat (msec) : 20=2.26%, 50=97.35%, 100=0.40% 00:32:22.129 cpu : usr=98.89%, sys=0.78%, ctx=66, majf=0, minf=9 00:32:22.129 IO depths : 1=5.8%, 2=11.7%, 4=23.9%, 8=51.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:32:22.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.129 issued rwts: total=5055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.129 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.129 filename0: (groupid=0, jobs=1): err= 0: pid=924797: Mon Jul 15 15:37:30 2024 00:32:22.129 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10010msec) 00:32:22.129 slat (usec): min=5, max=102, avg=16.88, stdev=12.22 00:32:22.129 clat (usec): min=12333, max=50295, avg=31940.63, stdev=1937.85 00:32:22.129 lat (usec): min=12351, max=50317, avg=31957.51, stdev=1937.80 00:32:22.129 clat percentiles (usec): 00:32:22.129 | 1.00th=[21103], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:22.129 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.129 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:32:22.129 | 99.00th=[33817], 99.50th=[34341], 99.90th=[46924], 99.95th=[48497], 00:32:22.129 | 99.99th=[50070] 00:32:22.129 bw ( KiB/s): min= 1904, max= 2048, per=4.17%, avg=1994.11, stdev=65.15, samples=19 00:32:22.129 iops : min= 476, max= 512, avg=498.53, stdev=16.29, samples=19 00:32:22.129 lat (msec) : 20=0.76%, 50=99.20%, 100=0.04% 00:32:22.129 cpu : usr=99.16%, sys=0.51%, ctx=75, majf=0, minf=9 00:32:22.129 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:22.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename0: (groupid=0, jobs=1): err= 0: pid=924798: Mon Jul 15 15:37:30 2024 00:32:22.130 
read: IOPS=501, BW=2007KiB/s (2056kB/s)(19.6MiB/10023msec) 00:32:22.130 slat (usec): min=5, max=109, avg=25.11, stdev=19.08 00:32:22.130 clat (usec): min=15370, max=50935, avg=31678.73, stdev=3678.23 00:32:22.130 lat (usec): min=15376, max=50961, avg=31703.85, stdev=3679.83 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[20055], 5.00th=[23725], 10.00th=[28443], 20.00th=[31589], 00:32:22.130 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.130 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[35914], 00:32:22.130 | 99.00th=[44303], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:32:22.130 | 99.99th=[51119] 00:32:22.130 bw ( KiB/s): min= 1792, max= 2256, per=4.22%, avg=2016.84, stdev=120.14, samples=19 00:32:22.130 iops : min= 448, max= 564, avg=504.21, stdev=30.04, samples=19 00:32:22.130 lat (msec) : 20=0.91%, 50=98.77%, 100=0.32% 00:32:22.130 cpu : usr=99.03%, sys=0.70%, ctx=16, majf=0, minf=9 00:32:22.130 IO depths : 1=4.9%, 2=9.8%, 4=20.7%, 8=56.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=5030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename0: (groupid=0, jobs=1): err= 0: pid=924799: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10016msec) 00:32:22.130 slat (nsec): min=5708, max=55328, avg=12778.69, stdev=7654.68 00:32:22.130 clat (usec): min=18222, max=50629, avg=32196.02, stdev=1479.48 00:32:22.130 lat (usec): min=18228, max=50655, avg=32208.80, stdev=1479.37 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[26608], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:22.130 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.130 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:22.130 | 99.00th=[39060], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:32:22.130 | 99.99th=[50594] 00:32:22.130 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.63, stdev=62.56, samples=19 00:32:22.130 iops : min= 480, max= 512, avg=495.16, stdev=15.64, samples=19 00:32:22.130 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:32:22.130 cpu : usr=99.10%, sys=0.62%, ctx=12, majf=0, minf=9 00:32:22.130 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename0: (groupid=0, jobs=1): err= 0: pid=924800: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10004msec) 00:32:22.130 slat (nsec): min=5678, max=71963, avg=16474.92, stdev=9821.30 00:32:22.130 clat (usec): min=12041, max=61852, avg=32125.12, stdev=1435.40 00:32:22.130 lat (usec): min=12051, max=61876, avg=32141.59, stdev=1435.51 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[30540], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:22.130 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.130 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:22.130 | 
99.00th=[34341], 99.50th=[34341], 99.90th=[47973], 99.95th=[54264], 00:32:22.130 | 99.99th=[61604] 00:32:22.130 bw ( KiB/s): min= 1916, max= 2048, per=4.14%, avg=1980.42, stdev=64.34, samples=19 00:32:22.130 iops : min= 479, max= 512, avg=495.11, stdev=16.09, samples=19 00:32:22.130 lat (msec) : 20=0.44%, 50=99.48%, 100=0.08% 00:32:22.130 cpu : usr=98.96%, sys=0.66%, ctx=102, majf=0, minf=9 00:32:22.130 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename0: (groupid=0, jobs=1): err= 0: pid=924801: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10007msec) 00:32:22.130 slat (nsec): min=5865, max=85424, avg=26795.31, stdev=14123.45 00:32:22.130 clat (usec): min=11572, max=56171, avg=32047.04, stdev=1983.29 00:32:22.130 lat (usec): min=11605, max=56187, avg=32073.83, stdev=1982.66 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[28967], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.130 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.130 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.130 | 99.00th=[33817], 99.50th=[36439], 99.90th=[56361], 99.95th=[56361], 00:32:22.130 | 99.99th=[56361] 00:32:22.130 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=77.69, samples=19 00:32:22.130 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:32:22.130 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:32:22.130 cpu : usr=98.67%, sys=0.86%, ctx=85, majf=0, minf=9 00:32:22.130 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename1: (groupid=0, jobs=1): err= 0: pid=924802: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10052msec) 00:32:22.130 slat (nsec): min=5655, max=78078, avg=14577.36, stdev=11153.57 00:32:22.130 clat (usec): min=18747, max=70736, avg=32008.69, stdev=2953.12 00:32:22.130 lat (usec): min=18753, max=70742, avg=32023.26, stdev=2953.01 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[23200], 5.00th=[26346], 10.00th=[31589], 20.00th=[31851], 00:32:22.130 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.130 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33817], 00:32:22.130 | 99.00th=[40109], 99.50th=[42730], 99.90th=[70779], 99.95th=[70779], 00:32:22.130 | 99.99th=[70779] 00:32:22.130 bw ( KiB/s): min= 1920, max= 2144, per=4.18%, avg=1996.00, stdev=66.05, samples=20 00:32:22.130 iops : min= 480, max= 536, avg=499.00, stdev=16.51, samples=20 00:32:22.130 lat (msec) : 20=0.36%, 50=99.40%, 100=0.24% 00:32:22.130 cpu : usr=98.58%, sys=1.09%, ctx=92, majf=0, minf=9 00:32:22.130 IO depths : 1=4.8%, 2=9.8%, 4=20.9%, 8=56.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 
complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename1: (groupid=0, jobs=1): err= 0: pid=924803: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=543, BW=2173KiB/s (2225kB/s)(21.3MiB/10026msec) 00:32:22.130 slat (nsec): min=5643, max=70146, avg=9534.31, stdev=6565.02 00:32:22.130 clat (usec): min=10911, max=48242, avg=29382.21, stdev=5457.75 00:32:22.130 lat (usec): min=10920, max=48248, avg=29391.75, stdev=5458.90 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[12125], 5.00th=[17957], 10.00th=[20841], 20.00th=[25297], 00:32:22.130 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.130 | 70.00th=[32113], 80.00th=[32113], 90.00th=[32637], 95.00th=[32900], 00:32:22.130 | 99.00th=[37487], 99.50th=[40633], 99.90th=[46924], 99.95th=[47973], 00:32:22.130 | 99.99th=[48497] 00:32:22.130 bw ( KiB/s): min= 1920, max= 2600, per=4.54%, avg=2172.20, stdev=227.62, samples=20 00:32:22.130 iops : min= 480, max= 650, avg=543.05, stdev=56.91, samples=20 00:32:22.130 lat (msec) : 20=9.13%, 50=90.87% 00:32:22.130 cpu : usr=99.28%, sys=0.45%, ctx=11, majf=0, minf=9 00:32:22.130 IO depths : 1=3.8%, 2=8.0%, 4=18.4%, 8=60.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:32:22.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 complete : 0=0.0%, 4=92.3%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.130 issued rwts: total=5446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.130 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.130 filename1: (groupid=0, jobs=1): err= 0: pid=924804: Mon Jul 15 15:37:30 2024 00:32:22.130 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:32:22.130 slat (usec): min=5, max=107, avg=30.67, stdev=16.59 00:32:22.130 clat (usec): min=11403, max=54652, avg=32003.45, stdev=1927.77 00:32:22.130 lat (usec): min=11426, max=54669, avg=32034.12, stdev=1926.74 00:32:22.130 clat percentiles (usec): 00:32:22.130 | 1.00th=[28967], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.130 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.130 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.130 | 99.00th=[33817], 99.50th=[34866], 99.90th=[54789], 99.95th=[54789], 00:32:22.130 | 99.99th=[54789] 00:32:22.130 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.63, stdev=77.44, samples=19 00:32:22.130 iops : min= 448, max= 512, avg=493.37, stdev=19.32, samples=19 00:32:22.130 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:32:22.130 cpu : usr=98.13%, sys=1.03%, ctx=256, majf=0, minf=9 00:32:22.130 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=924805: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10015msec) 00:32:22.131 slat (usec): min=5, max=107, avg=19.93, stdev=16.89 00:32:22.131 clat (usec): min=13509, max=55481, avg=32085.79, stdev=3276.21 00:32:22.131 lat (usec): min=13516, max=55503, avg=32105.73, stdev=3276.27 
00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[22414], 5.00th=[26346], 10.00th=[31327], 20.00th=[31589], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.131 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[37487], 00:32:22.131 | 99.00th=[44827], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:32:22.131 | 99.99th=[55313] 00:32:22.131 bw ( KiB/s): min= 1795, max= 2048, per=4.14%, avg=1979.95, stdev=74.22, samples=19 00:32:22.131 iops : min= 448, max= 512, avg=494.95, stdev=18.66, samples=19 00:32:22.131 lat (msec) : 20=0.76%, 50=98.67%, 100=0.56% 00:32:22.131 cpu : usr=99.11%, sys=0.62%, ctx=19, majf=0, minf=9 00:32:22.131 IO depths : 1=3.9%, 2=8.5%, 4=19.9%, 8=58.9%, 16=8.8%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=924806: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=501, BW=2007KiB/s (2055kB/s)(19.6MiB/10013msec) 00:32:22.131 slat (usec): min=5, max=109, avg=14.32, stdev=10.38 00:32:22.131 clat (usec): min=4398, max=38037, avg=31757.19, stdev=2946.12 00:32:22.131 lat (usec): min=4417, max=38046, avg=31771.52, stdev=2945.26 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[11994], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.131 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.131 | 99.00th=[33817], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:32:22.131 | 99.99th=[38011] 00:32:22.131 bw ( KiB/s): min= 1920, max= 2432, per=4.19%, avg=2003.20, stdev=119.46, samples=20 00:32:22.131 iops : min= 480, max= 608, avg=500.80, stdev=29.87, samples=20 00:32:22.131 lat (msec) : 10=0.96%, 20=0.32%, 50=98.73% 00:32:22.131 cpu : usr=98.72%, sys=0.85%, ctx=142, majf=0, minf=9 00:32:22.131 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=924807: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:32:22.131 slat (nsec): min=5672, max=99231, avg=27247.98, stdev=18966.61 00:32:22.131 clat (usec): min=22505, max=39664, avg=32044.75, stdev=902.47 00:32:22.131 lat (usec): min=22522, max=39732, avg=32072.00, stdev=899.51 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.131 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.131 | 99.00th=[33817], 99.50th=[34866], 99.90th=[37487], 99.95th=[38536], 00:32:22.131 | 99.99th=[39584] 00:32:22.131 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.37, stdev=65.39, samples=19 00:32:22.131 iops : min= 480, max= 512, avg=495.05, stdev=16.31, samples=19 00:32:22.131 lat (msec) : 
50=100.00% 00:32:22.131 cpu : usr=99.18%, sys=0.53%, ctx=36, majf=0, minf=9 00:32:22.131 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=924808: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:32:22.131 slat (nsec): min=5675, max=98998, avg=30078.83, stdev=17602.96 00:32:22.131 clat (usec): min=10995, max=55378, avg=31979.95, stdev=2056.63 00:32:22.131 lat (usec): min=11011, max=55395, avg=32010.03, stdev=2056.91 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.131 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.131 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:32:22.131 | 99.00th=[33817], 99.50th=[38536], 99.90th=[55313], 99.95th=[55313], 00:32:22.131 | 99.99th=[55313] 00:32:22.131 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1974.05, stdev=77.30, samples=19 00:32:22.131 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:32:22.131 lat (msec) : 20=0.65%, 50=99.03%, 100=0.32% 00:32:22.131 cpu : usr=98.76%, sys=0.71%, ctx=102, majf=0, minf=9 00:32:22.131 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=924809: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.4MiB/10021msec) 00:32:22.131 slat (nsec): min=5671, max=96946, avg=30254.75, stdev=16290.79 00:32:22.131 clat (usec): min=21502, max=66744, avg=32063.63, stdev=1624.95 00:32:22.131 lat (usec): min=21512, max=66763, avg=32093.89, stdev=1623.72 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.131 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:32:22.131 | 99.00th=[33817], 99.50th=[34866], 99.90th=[66323], 99.95th=[66323], 00:32:22.131 | 99.99th=[66847] 00:32:22.131 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1974.05, stdev=77.30, samples=19 00:32:22.131 iops : min= 448, max= 512, avg=493.47, stdev=19.42, samples=19 00:32:22.131 lat (msec) : 50=99.78%, 100=0.22% 00:32:22.131 cpu : usr=98.99%, sys=0.70%, ctx=67, majf=0, minf=9 00:32:22.131 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename2: (groupid=0, jobs=1): err= 0: pid=924810: Mon Jul 15 15:37:30 2024 00:32:22.131 read: 
IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10007msec) 00:32:22.131 slat (usec): min=5, max=107, avg=29.76, stdev=16.70 00:32:22.131 clat (usec): min=11525, max=56410, avg=32024.35, stdev=2017.02 00:32:22.131 lat (usec): min=11539, max=56426, avg=32054.12, stdev=2015.92 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[28705], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.131 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.131 | 99.00th=[33817], 99.50th=[38536], 99.90th=[56361], 99.95th=[56361], 00:32:22.131 | 99.99th=[56361] 00:32:22.131 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=76.40, samples=19 00:32:22.131 iops : min= 448, max= 512, avg=493.47, stdev=19.10, samples=19 00:32:22.131 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:32:22.131 cpu : usr=99.21%, sys=0.52%, ctx=12, majf=0, minf=9 00:32:22.131 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename2: (groupid=0, jobs=1): err= 0: pid=924811: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=496, BW=1988KiB/s (2036kB/s)(19.4MiB/10005msec) 00:32:22.131 slat (nsec): min=5647, max=82610, avg=20354.58, stdev=12720.83 00:32:22.131 clat (usec): min=10953, max=60124, avg=32020.21, stdev=3370.02 00:32:22.131 lat (usec): min=10959, max=60141, avg=32040.56, stdev=3370.06 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[21103], 5.00th=[28443], 10.00th=[31589], 20.00th=[31851], 00:32:22.131 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.131 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[34341], 00:32:22.131 | 99.00th=[42206], 99.50th=[53216], 99.90th=[60031], 99.95th=[60031], 00:32:22.131 | 99.99th=[60031] 00:32:22.131 bw ( KiB/s): min= 1792, max= 2080, per=4.14%, avg=1978.68, stdev=75.75, samples=19 00:32:22.131 iops : min= 448, max= 520, avg=494.63, stdev=18.90, samples=19 00:32:22.131 lat (msec) : 20=0.80%, 50=98.55%, 100=0.64% 00:32:22.131 cpu : usr=99.18%, sys=0.54%, ctx=15, majf=0, minf=9 00:32:22.131 IO depths : 1=4.7%, 2=9.5%, 4=20.1%, 8=57.1%, 16=8.6%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=92.9%, 8=2.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 issued rwts: total=4972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.131 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.131 filename2: (groupid=0, jobs=1): err= 0: pid=924812: Mon Jul 15 15:37:30 2024 00:32:22.131 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:32:22.131 slat (usec): min=5, max=102, avg=33.76, stdev=18.20 00:32:22.131 clat (usec): min=11931, max=54308, avg=31952.00, stdev=1863.54 00:32:22.131 lat (usec): min=11937, max=54326, avg=31985.76, stdev=1863.12 00:32:22.131 clat percentiles (usec): 00:32:22.131 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:32:22.131 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.131 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:32:22.131 | 99.00th=[33817], 
99.50th=[34866], 99.90th=[54264], 99.95th=[54264], 00:32:22.131 | 99.99th=[54264] 00:32:22.131 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.63, stdev=77.44, samples=19 00:32:22.131 iops : min= 448, max= 512, avg=493.37, stdev=19.32, samples=19 00:32:22.131 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:32:22.131 cpu : usr=98.82%, sys=0.66%, ctx=122, majf=0, minf=9 00:32:22.131 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.131 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 filename2: (groupid=0, jobs=1): err= 0: pid=924813: Mon Jul 15 15:37:30 2024 00:32:22.132 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:32:22.132 slat (nsec): min=5527, max=96772, avg=29873.75, stdev=16377.77 00:32:22.132 clat (usec): min=7595, max=54299, avg=31996.86, stdev=1981.71 00:32:22.132 lat (usec): min=7601, max=54317, avg=32026.73, stdev=1981.31 00:32:22.132 clat percentiles (usec): 00:32:22.132 | 1.00th=[27132], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:32:22.132 | 30.00th=[31851], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.132 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.132 | 99.00th=[34866], 99.50th=[38011], 99.90th=[54264], 99.95th=[54264], 00:32:22.132 | 99.99th=[54264] 00:32:22.132 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.63, stdev=77.44, samples=19 00:32:22.132 iops : min= 448, max= 512, avg=493.37, stdev=19.32, samples=19 00:32:22.132 lat (msec) : 10=0.04%, 20=0.28%, 50=99.35%, 100=0.32% 00:32:22.132 cpu : usr=99.41%, sys=0.32%, ctx=8, majf=0, minf=9 00:32:22.132 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:32:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 filename2: (groupid=0, jobs=1): err= 0: pid=924814: Mon Jul 15 15:37:30 2024 00:32:22.132 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10013msec) 00:32:22.132 slat (usec): min=5, max=100, avg=22.62, stdev=18.43 00:32:22.132 clat (usec): min=12174, max=55489, avg=32153.49, stdev=1856.89 00:32:22.132 lat (usec): min=12209, max=55514, avg=32176.11, stdev=1855.30 00:32:22.132 clat percentiles (usec): 00:32:22.132 | 1.00th=[28967], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:32:22.132 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.132 | 70.00th=[32375], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:32:22.132 | 99.00th=[38536], 99.50th=[40109], 99.90th=[55313], 99.95th=[55313], 00:32:22.132 | 99.99th=[55313] 00:32:22.132 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1973.89, stdev=76.40, samples=19 00:32:22.132 iops : min= 448, max= 512, avg=493.47, stdev=19.10, samples=19 00:32:22.132 lat (msec) : 20=0.12%, 50=99.56%, 100=0.32% 00:32:22.132 cpu : usr=98.48%, sys=0.90%, ctx=78, majf=0, minf=9 00:32:22.132 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:32:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 complete : 0=0.0%, 
4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 filename2: (groupid=0, jobs=1): err= 0: pid=924815: Mon Jul 15 15:37:30 2024 00:32:22.132 read: IOPS=496, BW=1986KiB/s (2033kB/s)(19.5MiB/10046msec) 00:32:22.132 slat (usec): min=5, max=102, avg=24.35, stdev=18.64 00:32:22.132 clat (usec): min=13857, max=55780, avg=31955.84, stdev=3545.12 00:32:22.132 lat (usec): min=13864, max=55787, avg=31980.18, stdev=3544.97 00:32:22.132 clat percentiles (usec): 00:32:22.132 | 1.00th=[20579], 5.00th=[26608], 10.00th=[31065], 20.00th=[31589], 00:32:22.132 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:32:22.132 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[35914], 00:32:22.132 | 99.00th=[47973], 99.50th=[51643], 99.90th=[54264], 99.95th=[55837], 00:32:22.132 | 99.99th=[55837] 00:32:22.132 bw ( KiB/s): min= 1792, max= 2064, per=4.16%, avg=1987.95, stdev=77.42, samples=19 00:32:22.132 iops : min= 448, max= 516, avg=496.95, stdev=19.33, samples=19 00:32:22.132 lat (msec) : 20=0.56%, 50=98.76%, 100=0.68% 00:32:22.132 cpu : usr=99.17%, sys=0.54%, ctx=14, majf=0, minf=9 00:32:22.132 IO depths : 1=3.7%, 2=7.8%, 4=17.2%, 8=61.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:32:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 complete : 0=0.0%, 4=92.3%, 8=3.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 filename2: (groupid=0, jobs=1): err= 0: pid=924816: Mon Jul 15 15:37:30 2024 00:32:22.132 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10022msec) 00:32:22.132 slat (nsec): min=5683, max=52621, avg=12332.54, stdev=7943.97 00:32:22.132 clat (usec): min=9646, max=46505, avg=32019.97, stdev=1968.65 00:32:22.132 lat (usec): min=9666, max=46521, avg=32032.31, stdev=1968.30 00:32:22.132 clat percentiles (usec): 00:32:22.132 | 1.00th=[23462], 5.00th=[31589], 10.00th=[31589], 20.00th=[31851], 00:32:22.132 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.132 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:32:22.132 | 99.00th=[34341], 99.50th=[39584], 99.90th=[44827], 99.95th=[44827], 00:32:22.132 | 99.99th=[46400] 00:32:22.132 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1990.40, stdev=65.33, samples=20 00:32:22.132 iops : min= 480, max= 512, avg=497.60, stdev=16.33, samples=20 00:32:22.132 lat (msec) : 10=0.32%, 20=0.44%, 50=99.24% 00:32:22.132 cpu : usr=99.21%, sys=0.53%, ctx=10, majf=0, minf=9 00:32:22.132 IO depths : 1=5.4%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.1%, 32=0.0%, >=64=0.0% 00:32:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 filename2: (groupid=0, jobs=1): err= 0: pid=924817: Mon Jul 15 15:37:30 2024 00:32:22.132 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.4MiB/10006msec) 00:32:22.132 slat (nsec): min=5657, max=69879, avg=16718.84, stdev=10216.88 00:32:22.132 clat (usec): min=19664, max=39108, avg=32130.19, stdev=997.80 00:32:22.132 lat (usec): min=19674, max=39131, avg=32146.91, stdev=997.72 
00:32:22.132 clat percentiles (usec): 00:32:22.132 | 1.00th=[30802], 5.00th=[31589], 10.00th=[31851], 20.00th=[31851], 00:32:22.132 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:32:22.132 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:32:22.132 | 99.00th=[33817], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:32:22.132 | 99.99th=[39060] 00:32:22.132 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1980.63, stdev=65.66, samples=19 00:32:22.132 iops : min= 480, max= 512, avg=495.16, stdev=16.42, samples=19 00:32:22.132 lat (msec) : 20=0.32%, 50=99.68% 00:32:22.132 cpu : usr=99.21%, sys=0.51%, ctx=29, majf=0, minf=9 00:32:22.132 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.132 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.132 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:22.132 00:32:22.132 Run status group 0 (all jobs): 00:32:22.132 READ: bw=46.7MiB/s (48.9MB/s), 1977KiB/s-2173KiB/s (2025kB/s-2225kB/s), io=469MiB (492MB), run=10002-10052msec 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.132 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 bdev_null0 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 [2024-07-15 15:37:30.926148] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 bdev_null1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # config=() 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:22.133 { 00:32:22.133 "params": { 00:32:22.133 "name": "Nvme$subsystem", 00:32:22.133 "trtype": "$TEST_TRANSPORT", 00:32:22.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.133 "adrfam": "ipv4", 00:32:22.133 "trsvcid": "$NVMF_PORT", 00:32:22.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.133 "hdgst": ${hdgst:-false}, 00:32:22.133 "ddgst": ${ddgst:-false} 00:32:22.133 }, 00:32:22.133 "method": "bdev_nvme_attach_controller" 00:32:22.133 } 00:32:22.133 EOF 00:32:22.133 )") 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:22.133 { 00:32:22.133 "params": { 00:32:22.133 "name": "Nvme$subsystem", 00:32:22.133 "trtype": "$TEST_TRANSPORT", 00:32:22.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:22.133 "adrfam": "ipv4", 00:32:22.133 "trsvcid": "$NVMF_PORT", 00:32:22.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:22.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:22.133 
"hdgst": ${hdgst:-false}, 00:32:22.133 "ddgst": ${ddgst:-false} 00:32:22.133 }, 00:32:22.133 "method": "bdev_nvme_attach_controller" 00:32:22.133 } 00:32:22.133 EOF 00:32:22.133 )") 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:32:22.133 15:37:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:22.133 "params": { 00:32:22.133 "name": "Nvme0", 00:32:22.133 "trtype": "tcp", 00:32:22.133 "traddr": "10.0.0.2", 00:32:22.133 "adrfam": "ipv4", 00:32:22.133 "trsvcid": "4420", 00:32:22.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:22.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:22.133 "hdgst": false, 00:32:22.133 "ddgst": false 00:32:22.133 }, 00:32:22.133 "method": "bdev_nvme_attach_controller" 00:32:22.133 },{ 00:32:22.133 "params": { 00:32:22.133 "name": "Nvme1", 00:32:22.133 "trtype": "tcp", 00:32:22.133 "traddr": "10.0.0.2", 00:32:22.133 "adrfam": "ipv4", 00:32:22.133 "trsvcid": "4420", 00:32:22.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:22.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:22.133 "hdgst": false, 00:32:22.133 "ddgst": false 00:32:22.133 }, 00:32:22.133 "method": "bdev_nvme_attach_controller" 00:32:22.133 }' 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:22.133 15:37:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.133 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:22.133 ... 00:32:22.133 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:22.133 ... 
00:32:22.133 fio-3.35 00:32:22.133 Starting 4 threads 00:32:22.133 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.424 00:32:27.424 filename0: (groupid=0, jobs=1): err= 0: pid=927004: Mon Jul 15 15:37:36 2024 00:32:27.424 read: IOPS=2024, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5002msec) 00:32:27.424 slat (nsec): min=5407, max=38865, avg=8293.00, stdev=2856.37 00:32:27.424 clat (usec): min=1414, max=7500, avg=3929.13, stdev=755.33 00:32:27.424 lat (usec): min=1434, max=7508, avg=3937.42, stdev=755.24 00:32:27.424 clat percentiles (usec): 00:32:27.424 | 1.00th=[ 2704], 5.00th=[ 3032], 10.00th=[ 3228], 20.00th=[ 3425], 00:32:27.424 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3752], 60.00th=[ 3818], 00:32:27.424 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 5276], 95.00th=[ 5669], 00:32:27.424 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6652], 99.95th=[ 7177], 00:32:27.424 | 99.99th=[ 7504] 00:32:27.424 bw ( KiB/s): min=15792, max=16480, per=24.59%, avg=16196.80, stdev=244.76, samples=10 00:32:27.424 iops : min= 1974, max= 2060, avg=2024.60, stdev=30.59, samples=10 00:32:27.424 lat (msec) : 2=0.08%, 4=72.18%, 10=27.74% 00:32:27.424 cpu : usr=97.70%, sys=2.06%, ctx=8, majf=0, minf=0 00:32:27.424 IO depths : 1=0.2%, 2=0.4%, 4=72.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 issued rwts: total=10126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:27.424 filename0: (groupid=0, jobs=1): err= 0: pid=927005: Mon Jul 15 15:37:36 2024 00:32:27.424 read: IOPS=2216, BW=17.3MiB/s (18.2MB/s)(86.6MiB/5003msec) 00:32:27.424 slat (nsec): min=5412, max=35413, avg=8290.92, stdev=2187.18 00:32:27.424 clat (usec): min=1915, max=5920, avg=3588.09, stdev=502.56 00:32:27.424 lat (usec): min=1924, max=5925, avg=3596.38, stdev=502.51 00:32:27.424 clat percentiles (usec): 00:32:27.424 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 2933], 20.00th=[ 3195], 00:32:27.424 | 30.00th=[ 3359], 40.00th=[ 3523], 50.00th=[ 3621], 60.00th=[ 3654], 00:32:27.424 | 70.00th=[ 3851], 80.00th=[ 3884], 90.00th=[ 4178], 95.00th=[ 4490], 00:32:27.424 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 5866], 00:32:27.424 | 99.99th=[ 5932] 00:32:27.424 bw ( KiB/s): min=17232, max=18432, per=26.91%, avg=17726.40, stdev=411.70, samples=10 00:32:27.424 iops : min= 2154, max= 2304, avg=2215.80, stdev=51.46, samples=10 00:32:27.424 lat (msec) : 2=0.12%, 4=87.22%, 10=12.66% 00:32:27.424 cpu : usr=97.26%, sys=2.50%, ctx=9, majf=0, minf=9 00:32:27.424 IO depths : 1=0.1%, 2=2.9%, 4=66.9%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 issued rwts: total=11087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:27.424 filename1: (groupid=0, jobs=1): err= 0: pid=927006: Mon Jul 15 15:37:36 2024 00:32:27.424 read: IOPS=2019, BW=15.8MiB/s (16.5MB/s)(78.9MiB/5003msec) 00:32:27.424 slat (nsec): min=5415, max=34860, avg=8392.98, stdev=2828.78 00:32:27.424 clat (usec): min=1988, max=45739, avg=3938.29, stdev=1335.01 00:32:27.424 lat (usec): min=1994, max=45765, avg=3946.69, stdev=1335.08 00:32:27.424 clat percentiles (usec): 00:32:27.424 | 1.00th=[ 2769], 5.00th=[ 3195], 
10.00th=[ 3359], 20.00th=[ 3523], 00:32:27.424 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3785], 60.00th=[ 3851], 00:32:27.424 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4752], 95.00th=[ 5473], 00:32:27.424 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7308], 99.95th=[45876], 00:32:27.424 | 99.99th=[45876] 00:32:27.424 bw ( KiB/s): min=14896, max=16608, per=24.52%, avg=16152.00, stdev=471.89, samples=10 00:32:27.424 iops : min= 1862, max= 2076, avg=2019.00, stdev=58.99, samples=10 00:32:27.424 lat (msec) : 2=0.02%, 4=73.51%, 10=26.39%, 50=0.08% 00:32:27.424 cpu : usr=97.60%, sys=2.14%, ctx=8, majf=0, minf=9 00:32:27.424 IO depths : 1=0.2%, 2=0.5%, 4=72.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.424 issued rwts: total=10103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:27.424 filename1: (groupid=0, jobs=1): err= 0: pid=927007: Mon Jul 15 15:37:36 2024 00:32:27.424 read: IOPS=1973, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5003msec) 00:32:27.424 slat (nsec): min=5410, max=28978, avg=6059.11, stdev=1648.58 00:32:27.424 clat (usec): min=2405, max=6966, avg=4035.52, stdev=719.46 00:32:27.424 lat (usec): min=2415, max=6972, avg=4041.58, stdev=719.37 00:32:27.424 clat percentiles (usec): 00:32:27.424 | 1.00th=[ 2737], 5.00th=[ 3195], 10.00th=[ 3392], 20.00th=[ 3556], 00:32:27.424 | 30.00th=[ 3654], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3884], 00:32:27.424 | 70.00th=[ 4146], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5604], 00:32:27.424 | 99.00th=[ 6128], 99.50th=[ 6259], 99.90th=[ 6652], 99.95th=[ 6849], 00:32:27.424 | 99.99th=[ 6980] 00:32:27.424 bw ( KiB/s): min=15344, max=16112, per=23.98%, avg=15792.00, stdev=216.64, samples=10 00:32:27.424 iops : min= 1918, max= 2014, avg=1974.00, stdev=27.08, samples=10 00:32:27.424 lat (msec) : 4=64.02%, 10=35.98% 00:32:27.424 cpu : usr=97.30%, sys=2.46%, ctx=8, majf=0, minf=9 00:32:27.424 IO depths : 1=0.8%, 2=1.8%, 4=70.3%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:27.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.425 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.425 issued rwts: total=9875,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.425 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:27.425 00:32:27.425 Run status group 0 (all jobs): 00:32:27.425 READ: bw=64.3MiB/s (67.4MB/s), 15.4MiB/s-17.3MiB/s (16.2MB/s-18.2MB/s), io=322MiB (337MB), run=5002-5003msec 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:27.686 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 00:32:27.687 real 0m24.390s 00:32:27.687 user 5m19.626s 00:32:27.687 sys 0m3.608s 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 ************************************ 00:32:27.687 END TEST fio_dif_rand_params 00:32:27.687 ************************************ 00:32:27.687 15:37:37 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:27.687 15:37:37 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:27.687 15:37:37 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:27.687 15:37:37 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 ************************************ 00:32:27.687 START TEST fio_dif_digest 00:32:27.687 ************************************ 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:27.687 15:37:37 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 bdev_null0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:27.687 [2024-07-15 15:37:37.263647] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:27.687 { 00:32:27.687 "params": { 00:32:27.687 "name": "Nvme$subsystem", 00:32:27.687 "trtype": "$TEST_TRANSPORT", 00:32:27.687 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.687 "adrfam": "ipv4", 
00:32:27.687 "trsvcid": "$NVMF_PORT", 00:32:27.687 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.687 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.687 "hdgst": ${hdgst:-false}, 00:32:27.687 "ddgst": ${ddgst:-false} 00:32:27.687 }, 00:32:27.687 "method": "bdev_nvme_attach_controller" 00:32:27.687 } 00:32:27.687 EOF 00:32:27.687 )") 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:27.687 15:37:37 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:27.687 "params": { 00:32:27.687 "name": "Nvme0", 00:32:27.687 "trtype": "tcp", 00:32:27.687 "traddr": "10.0.0.2", 00:32:27.687 "adrfam": "ipv4", 00:32:27.687 "trsvcid": "4420", 00:32:27.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.687 "hdgst": true, 00:32:27.687 "ddgst": true 00:32:27.687 }, 00:32:27.687 "method": "bdev_nvme_attach_controller" 00:32:27.687 }' 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:27.972 15:37:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:28.239 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:28.239 ... 
00:32:28.239 fio-3.35 00:32:28.239 Starting 3 threads 00:32:28.239 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.542 00:32:40.542 filename0: (groupid=0, jobs=1): err= 0: pid=928515: Mon Jul 15 15:37:48 2024 00:32:40.542 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10044msec) 00:32:40.542 slat (nsec): min=5768, max=41442, avg=6605.87, stdev=1520.56 00:32:40.542 clat (usec): min=9205, max=51472, avg=14105.44, stdev=1622.44 00:32:40.542 lat (usec): min=9211, max=51482, avg=14112.05, stdev=1622.48 00:32:40.542 clat percentiles (usec): 00:32:40.542 | 1.00th=[11207], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:32:40.542 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:32:40.542 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15533], 95.00th=[16057], 00:32:40.542 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19006], 99.95th=[51643], 00:32:40.542 | 99.99th=[51643] 00:32:40.542 bw ( KiB/s): min=26112, max=27904, per=32.83%, avg=27266.75, stdev=476.00, samples=20 00:32:40.542 iops : min= 204, max= 218, avg=213.00, stdev= 3.70, samples=20 00:32:40.542 lat (msec) : 10=0.14%, 20=99.77%, 100=0.09% 00:32:40.542 cpu : usr=95.77%, sys=4.00%, ctx=27, majf=0, minf=190 00:32:40.542 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:40.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.542 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.542 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:40.542 filename0: (groupid=0, jobs=1): err= 0: pid=928516: Mon Jul 15 15:37:48 2024 00:32:40.542 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(272MiB/10046msec) 00:32:40.542 slat (nsec): min=5793, max=31555, avg=6541.76, stdev=915.05 00:32:40.542 clat (usec): min=9242, max=48287, avg=13781.13, stdev=1286.50 00:32:40.542 lat (usec): min=9249, max=48293, avg=13787.67, stdev=1286.54 00:32:40.542 clat percentiles (usec): 00:32:40.542 | 1.00th=[11469], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:32:40.542 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:32:40.543 | 70.00th=[14222], 80.00th=[14615], 90.00th=[15139], 95.00th=[15533], 00:32:40.543 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17433], 99.95th=[17695], 00:32:40.543 | 99.99th=[48497] 00:32:40.543 bw ( KiB/s): min=27136, max=29184, per=33.57%, avg=27878.40, stdev=490.67, samples=20 00:32:40.543 iops : min= 212, max= 228, avg=217.80, stdev= 3.83, samples=20 00:32:40.543 lat (msec) : 10=0.37%, 20=99.59%, 50=0.05% 00:32:40.543 cpu : usr=95.88%, sys=3.90%, ctx=26, majf=0, minf=116 00:32:40.543 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:40.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.543 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:40.543 filename0: (groupid=0, jobs=1): err= 0: pid=928517: Mon Jul 15 15:37:48 2024 00:32:40.543 read: IOPS=219, BW=27.5MiB/s (28.8MB/s)(276MiB/10047msec) 00:32:40.543 slat (nsec): min=5782, max=35462, avg=6566.46, stdev=1243.78 00:32:40.543 clat (usec): min=9305, max=54637, avg=13628.57, stdev=2158.96 00:32:40.543 lat (usec): min=9311, max=54643, avg=13635.14, stdev=2158.96 00:32:40.543 clat percentiles (usec): 00:32:40.543 | 1.00th=[10945], 
5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:32:40.543 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:32:40.543 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15401], 00:32:40.543 | 99.00th=[15926], 99.50th=[16450], 99.90th=[53740], 99.95th=[54264], 00:32:40.543 | 99.99th=[54789] 00:32:40.543 bw ( KiB/s): min=25344, max=29440, per=33.99%, avg=28224.00, stdev=787.40, samples=20 00:32:40.543 iops : min= 198, max= 230, avg=220.50, stdev= 6.15, samples=20 00:32:40.543 lat (msec) : 10=0.09%, 20=99.68%, 100=0.23% 00:32:40.543 cpu : usr=95.20%, sys=4.56%, ctx=21, majf=0, minf=101 00:32:40.543 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:40.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.543 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:40.543 00:32:40.543 Run status group 0 (all jobs): 00:32:40.543 READ: bw=81.1MiB/s (85.0MB/s), 26.5MiB/s-27.5MiB/s (27.8MB/s-28.8MB/s), io=815MiB (854MB), run=10044-10047msec 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.543 00:32:40.543 real 0m11.219s 00:32:40.543 user 0m42.172s 00:32:40.543 sys 0m1.605s 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.543 15:37:48 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.543 ************************************ 00:32:40.543 END TEST fio_dif_digest 00:32:40.543 ************************************ 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:40.543 15:37:48 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:40.543 15:37:48 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:32:40.543 rmmod nvme_tcp 00:32:40.543 rmmod nvme_fabrics 00:32:40.543 rmmod nvme_keyring 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 918041 ']' 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 918041 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 918041 ']' 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 918041 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 918041 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 918041' 00:32:40.543 killing process with pid 918041 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@967 -- # kill 918041 00:32:40.543 15:37:48 nvmf_dif -- common/autotest_common.sh@972 -- # wait 918041 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:40.543 15:37:48 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:43.090 Waiting for block devices as requested 00:32:43.090 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:43.090 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:43.351 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:43.351 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:43.611 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:43.611 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:43.611 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:43.611 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:43.871 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:43.871 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:43.871 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:43.871 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:44.131 15:37:53 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:44.132 15:37:53 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:44.132 15:37:53 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.132 15:37:53 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.132 15:37:53 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.132 15:37:53 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:44.132 15:37:53 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.044 15:37:55 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:46.044 00:32:46.044 real 1m17.197s 00:32:46.044 user 8m4.625s 00:32:46.044 sys 0m19.306s 00:32:46.044 15:37:55 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:46.044 
15:37:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:46.044 ************************************ 00:32:46.044 END TEST nvmf_dif 00:32:46.044 ************************************ 00:32:46.044 15:37:55 -- common/autotest_common.sh@1142 -- # return 0 00:32:46.044 15:37:55 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:46.044 15:37:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:46.044 15:37:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:46.044 15:37:55 -- common/autotest_common.sh@10 -- # set +x 00:32:46.304 ************************************ 00:32:46.304 START TEST nvmf_abort_qd_sizes 00:32:46.304 ************************************ 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:46.304 * Looking for test storage... 00:32:46.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:46.304 15:37:55 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.305 15:37:55 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:46.305 15:37:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:54.446 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:54.446 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:54.446 Found net devices under 0000:31:00.0: cvl_0_0 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:54.446 Found net devices under 0000:31:00.1: cvl_0_1 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
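For readers tracing the nvmf_tcp_init step that follows in this log, the namespace and interface plumbing it performs can be condensed to roughly the sketch below. This is only an illustrative summary assembled from the commands visible in the trace; the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk, the 10.0.0.1/10.0.0.2 addresses and port 4420 are all taken from the log itself, it is not the autotest script, and it assumes root privileges on a host where those net devices exist.

# Condensed sketch of the target/initiator network setup performed in the trace below
# (names and addresses as reported in this log; illustrative only, not the autotest script).
ip netns add cvl_0_0_ns_spdk                                         # namespace hosting the NVMe-oF target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic to the listener port
ping -c 1 10.0.0.2                                                    # host -> target namespace reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target namespace -> host reachability check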
00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:54.446 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:54.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:54.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:32:54.447 00:32:54.447 --- 10.0.0.2 ping statistics --- 00:32:54.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.447 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:54.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:54.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:32:54.447 00:32:54.447 --- 10.0.0.1 ping statistics --- 00:32:54.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.447 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:54.447 15:38:03 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:57.770 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:57.770 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=938319 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 938319 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 938319 ']' 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:57.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.770 15:38:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:57.770 [2024-07-15 15:38:07.026680] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:32:57.770 [2024-07-15 15:38:07.026725] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.770 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.770 [2024-07-15 15:38:07.095015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:57.770 [2024-07-15 15:38:07.161336] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.770 [2024-07-15 15:38:07.161373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.770 [2024-07-15 15:38:07.161380] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.770 [2024-07-15 15:38:07.161387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.770 [2024-07-15 15:38:07.161392] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:57.770 [2024-07-15 15:38:07.161504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.770 [2024-07-15 15:38:07.161640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:57.770 [2024-07-15 15:38:07.161797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.770 [2024-07-15 15:38:07.161798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:58.341 15:38:07 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:58.341 15:38:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:58.341 ************************************ 00:32:58.341 START TEST spdk_target_abort 00:32:58.341 ************************************ 00:32:58.341 15:38:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:58.341 15:38:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:58.341 15:38:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:58.341 15:38:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.341 15:38:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.602 spdk_targetn1 00:32:58.602 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.602 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.602 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.602 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.602 [2024-07-15 15:38:08.220949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:58.863 [2024-07-15 15:38:08.261203] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:58.863 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:58.864 15:38:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:58.864 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:58.864 [2024-07-15 15:38:08.429398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:288 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:58.864 [2024-07-15 15:38:08.429424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:32:58.864 [2024-07-15 15:38:08.444367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:768 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:58.864 [2024-07-15 15:38:08.444384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0062 p:1 m:0 dnr:0 00:32:59.124 [2024-07-15 15:38:08.524321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2176 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:32:59.124 [2024-07-15 15:38:08.524337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:59.124 [2024-07-15 15:38:08.548377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3056 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:59.124 [2024-07-15 15:38:08.548392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:02.420 Initializing NVMe Controllers 00:33:02.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:02.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:02.420 Initialization complete. Launching workers. 00:33:02.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12615, failed: 4 00:33:02.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3573, failed to submit 9046 00:33:02.420 success 764, unsuccess 2809, failed 0 00:33:02.420 15:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:02.420 15:38:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:02.420 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.420 [2024-07-15 15:38:11.817039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:2848 len:8 PRP1 0x200007c40000 PRP2 0x0 00:33:02.420 [2024-07-15 15:38:11.817075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:02.420 [2024-07-15 15:38:11.985039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:6784 len:8 PRP1 0x200007c44000 PRP2 0x0 00:33:02.420 [2024-07-15 15:38:11.985065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:03.034 [2024-07-15 15:38:12.472020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:17920 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:33:03.034 [2024-07-15 15:38:12.472055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00c1 p:1 m:0 dnr:0 00:33:03.321 [2024-07-15 15:38:12.877129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:4 cid:188 nsid:1 lba:27008 len:8 PRP1 0x200007c44000 PRP2 0x0 00:33:03.321 [2024-07-15 15:38:12.877156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:05.232 Initializing NVMe Controllers 00:33:05.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:05.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:05.232 Initialization complete. Launching workers. 00:33:05.232 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8486, failed: 4 00:33:05.232 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7242 00:33:05.232 success 325, unsuccess 923, failed 0 00:33:05.232 15:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:05.232 15:38:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:05.493 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.798 [2024-07-15 15:38:17.677794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:174 nsid:1 lba:303096 len:8 PRP1 0x2000078f0000 PRP2 0x0 00:33:08.798 [2024-07-15 15:38:17.677822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:174 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:08.798 Initializing NVMe Controllers 00:33:08.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:08.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:08.798 Initialization complete. Launching workers. 
00:33:08.798 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42344, failed: 1 00:33:08.798 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2529, failed to submit 39816 00:33:08.798 success 600, unsuccess 1929, failed 0 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.798 15:38:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 938319 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 938319 ']' 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 938319 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938319 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938319' 00:33:10.714 killing process with pid 938319 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 938319 00:33:10.714 15:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 938319 00:33:10.714 00:33:10.714 real 0m12.166s 00:33:10.714 user 0m49.704s 00:33:10.714 sys 0m1.663s 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:10.714 ************************************ 00:33:10.714 END TEST spdk_target_abort 00:33:10.714 ************************************ 00:33:10.714 15:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:10.714 15:38:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:10.714 15:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:10.714 15:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.714 15:38:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:10.714 
************************************ 00:33:10.714 START TEST kernel_target_abort 00:33:10.714 ************************************ 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:10.714 15:38:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:14.019 Waiting for block devices as requested 00:33:14.279 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:14.279 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:14.279 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:14.279 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:14.539 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:14.539 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:14.539 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:14.800 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:14.800 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:15.060 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:15.060 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:15.061 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:15.061 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:15.322 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:15.322 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:15.322 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:15.322 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:15.582 15:38:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:15.582 No valid GPT data, bailing 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:15.582 15:38:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 --hostid=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 -a 10.0.0.1 -t tcp -s 4420 00:33:15.582 00:33:15.582 Discovery Log Number of Records 2, Generation counter 2 00:33:15.582 =====Discovery Log Entry 0====== 00:33:15.582 trtype: tcp 00:33:15.582 adrfam: ipv4 00:33:15.582 subtype: current discovery subsystem 00:33:15.582 treq: not specified, sq flow control disable supported 00:33:15.582 portid: 1 00:33:15.582 trsvcid: 4420 00:33:15.582 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:15.582 traddr: 10.0.0.1 00:33:15.582 eflags: none 00:33:15.582 sectype: none 00:33:15.582 =====Discovery Log Entry 1====== 00:33:15.582 trtype: tcp 00:33:15.582 adrfam: ipv4 00:33:15.582 subtype: nvme subsystem 00:33:15.582 treq: not specified, sq flow control disable supported 00:33:15.582 portid: 1 00:33:15.582 trsvcid: 4420 00:33:15.582 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:15.582 traddr: 10.0.0.1 00:33:15.582 eflags: none 00:33:15.582 sectype: none 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:15.582 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.583 15:38:25 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:15.583 15:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:15.583 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.879 Initializing NVMe Controllers 00:33:18.879 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:18.879 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:18.879 Initialization complete. Launching workers. 00:33:18.879 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66050, failed: 0 00:33:18.879 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66050, failed to submit 0 00:33:18.879 success 0, unsuccess 66050, failed 0 00:33:18.879 15:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:18.879 15:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:18.879 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.175 Initializing NVMe Controllers 00:33:22.175 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:22.175 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:22.175 Initialization complete. Launching workers. 
00:33:22.175 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107316, failed: 0 00:33:22.175 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27018, failed to submit 80298 00:33:22.175 success 0, unsuccess 27018, failed 0 00:33:22.175 15:38:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:22.175 15:38:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:22.175 EAL: No free 2048 kB hugepages reported on node 1 00:33:24.715 Initializing NVMe Controllers 00:33:24.715 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:24.715 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:24.715 Initialization complete. Launching workers. 00:33:24.715 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 102729, failed: 0 00:33:24.715 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25682, failed to submit 77047 00:33:24.715 success 0, unsuccess 25682, failed 0 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:24.715 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:24.996 15:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:28.316 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:28.316 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:33:28.577 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:28.577 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:30.488 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:30.488 00:33:30.488 real 0m19.752s 00:33:30.488 user 0m9.578s 00:33:30.488 sys 0m5.920s 00:33:30.488 15:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:30.488 15:38:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:30.488 ************************************ 00:33:30.488 END TEST kernel_target_abort 00:33:30.488 ************************************ 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:30.488 15:38:39 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:30.488 rmmod nvme_tcp 00:33:30.488 rmmod nvme_fabrics 00:33:30.488 rmmod nvme_keyring 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 938319 ']' 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 938319 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 938319 ']' 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 938319 00:33:30.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (938319) - No such process 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 938319 is not found' 00:33:30.488 Process with pid 938319 is not found 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:30.488 15:38:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:34.697 Waiting for block devices as requested 00:33:34.697 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:34.697 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:34.959 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:34.959 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:34.959 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:35.219 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:35.219 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 
00:33:35.219 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:35.219 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:35.481 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:35.481 15:38:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:37.396 15:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:37.396 00:33:37.396 real 0m51.319s 00:33:37.396 user 1m4.357s 00:33:37.396 sys 0m18.488s 00:33:37.396 15:38:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.396 15:38:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:37.396 ************************************ 00:33:37.396 END TEST nvmf_abort_qd_sizes 00:33:37.396 ************************************ 00:33:37.659 15:38:47 -- common/autotest_common.sh@1142 -- # return 0 00:33:37.659 15:38:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:37.659 15:38:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:37.659 15:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.659 15:38:47 -- common/autotest_common.sh@10 -- # set +x 00:33:37.659 ************************************ 00:33:37.659 START TEST keyring_file 00:33:37.659 ************************************ 00:33:37.659 15:38:47 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:37.659 * Looking for test storage... 
00:33:37.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:37.659 15:38:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:37.659 15:38:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:37.659 15:38:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:37.659 15:38:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 15:38:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 15:38:47 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 15:38:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:37.659 15:38:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I5a3O3AnQq 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:37.659 15:38:47 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I5a3O3AnQq 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I5a3O3AnQq 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.I5a3O3AnQq 00:33:37.659 15:38:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lN9G41TiDO 00:33:37.659 15:38:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:37.659 15:38:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:37.921 15:38:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lN9G41TiDO 00:33:37.921 15:38:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lN9G41TiDO 00:33:37.921 15:38:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lN9G41TiDO 00:33:37.921 15:38:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=948512 00:33:37.921 15:38:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 948512 00:33:37.921 15:38:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 948512 ']' 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.921 15:38:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:37.921 [2024-07-15 15:38:47.355737] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:33:37.921 [2024-07-15 15:38:47.355815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948512 ] 00:33:37.921 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.921 [2024-07-15 15:38:47.426583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.921 [2024-07-15 15:38:47.501148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:38.864 15:38:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.864 [2024-07-15 15:38:48.126716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.864 null0 00:33:38.864 [2024-07-15 15:38:48.158756] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:38.864 [2024-07-15 15:38:48.158987] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:38.864 [2024-07-15 15:38:48.166763] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.864 15:38:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.864 [2024-07-15 15:38:48.178793] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:38.864 request: 00:33:38.864 { 00:33:38.864 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.864 "secure_channel": false, 00:33:38.864 "listen_address": { 00:33:38.864 "trtype": "tcp", 00:33:38.864 "traddr": "127.0.0.1", 00:33:38.864 "trsvcid": "4420" 00:33:38.864 }, 00:33:38.864 "method": "nvmf_subsystem_add_listener", 00:33:38.864 "req_id": 1 00:33:38.864 } 00:33:38.864 Got JSON-RPC error response 00:33:38.864 response: 00:33:38.864 { 00:33:38.864 "code": -32602, 00:33:38.864 "message": "Invalid parameters" 00:33:38.864 } 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:38.864 15:38:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=948756 00:33:38.864 15:38:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 948756 /var/tmp/bperf.sock 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 948756 ']' 00:33:38.864 15:38:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.864 15:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:38.864 [2024-07-15 15:38:48.236220] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 00:33:38.864 [2024-07-15 15:38:48.236267] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948756 ] 00:33:38.864 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.864 [2024-07-15 15:38:48.297386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.864 [2024-07-15 15:38:48.361046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.434 15:38:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.434 15:38:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:39.434 15:38:48 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:39.435 15:38:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:39.694 15:38:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lN9G41TiDO 00:33:39.694 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lN9G41TiDO 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.I5a3O3AnQq == \/\t\m\p\/\t\m\p\.\I\5\a\3\O\3\A\n\Q\q ]] 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@52 
-- # jq -r .path 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lN9G41TiDO == \/\t\m\p\/\t\m\p\.\l\N\9\G\4\1\T\i\D\O ]] 00:33:40.107 15:38:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.107 15:38:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.373 15:38:49 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:40.373 15:38:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.373 15:38:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:40.373 15:38:49 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.373 15:38:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:40.688 [2024-07-15 15:38:50.109539] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:40.688 nvme0n1 00:33:40.688 15:38:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:40.688 15:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.688 15:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.688 15:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.688 15:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.688 15:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.948 15:38:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:40.948 15:38:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:40.948 15:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:40.948 15:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.948 15:38:50 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:33:40.948 15:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.948 15:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.948 15:38:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:40.948 15:38:50 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.207 Running I/O for 1 seconds... 00:33:42.145 00:33:42.145 Latency(us) 00:33:42.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.145 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:42.145 nvme0n1 : 1.00 13694.46 53.49 0.00 0.00 9321.18 4778.67 21189.97 00:33:42.145 =================================================================================================================== 00:33:42.145 Total : 13694.46 53.49 0.00 0.00 9321.18 4778.67 21189.97 00:33:42.145 0 00:33:42.145 15:38:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:42.145 15:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:42.405 15:38:51 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.405 15:38:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:42.405 15:38:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:42.405 15:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.666 15:38:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:42.666 15:38:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:42.666 [2024-07-15 15:38:52.244190] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:42.666 [2024-07-15 15:38:52.244889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1163240 (107): Transport endpoint is not connected 00:33:42.666 [2024-07-15 15:38:52.245880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1163240 (9): Bad file descriptor 00:33:42.666 [2024-07-15 15:38:52.246881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:42.666 [2024-07-15 15:38:52.246892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:42.666 [2024-07-15 15:38:52.246899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:42.666 request: 00:33:42.666 { 00:33:42.666 "name": "nvme0", 00:33:42.666 "trtype": "tcp", 00:33:42.666 "traddr": "127.0.0.1", 00:33:42.666 "adrfam": "ipv4", 00:33:42.666 "trsvcid": "4420", 00:33:42.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:42.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:42.666 "prchk_reftag": false, 00:33:42.666 "prchk_guard": false, 00:33:42.666 "hdgst": false, 00:33:42.666 "ddgst": false, 00:33:42.666 "psk": "key1", 00:33:42.666 "method": "bdev_nvme_attach_controller", 00:33:42.666 "req_id": 1 00:33:42.666 } 00:33:42.666 Got JSON-RPC error response 00:33:42.666 response: 00:33:42.666 { 00:33:42.666 "code": -5, 00:33:42.666 "message": "Input/output error" 00:33:42.666 } 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:42.666 15:38:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:42.666 15:38:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:42.666 15:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:42.926 15:38:52 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:42.926 15:38:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:42.926 15:38:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:42.926 15:38:52 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:33:42.926 15:38:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:42.926 15:38:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:42.926 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.186 15:38:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:43.186 15:38:52 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:43.186 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:43.186 15:38:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:43.186 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:43.447 15:38:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:43.447 15:38:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:43.447 15:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.447 15:38:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:43.447 15:38:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.I5a3O3AnQq 00:33:43.447 15:38:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.447 15:38:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.447 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.707 [2024-07-15 15:38:53.199574] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.I5a3O3AnQq': 0100660 00:33:43.707 [2024-07-15 15:38:53.199594] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:43.707 request: 00:33:43.707 { 00:33:43.707 "name": "key0", 00:33:43.707 "path": "/tmp/tmp.I5a3O3AnQq", 00:33:43.707 "method": "keyring_file_add_key", 00:33:43.707 "req_id": 1 00:33:43.707 } 00:33:43.707 Got JSON-RPC error response 00:33:43.707 response: 00:33:43.707 { 00:33:43.707 "code": -1, 00:33:43.707 "message": "Operation not permitted" 00:33:43.707 } 00:33:43.707 15:38:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:43.707 15:38:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:43.707 15:38:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:43.707 15:38:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
00:33:43.707 15:38:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.I5a3O3AnQq 00:33:43.707 15:38:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.707 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I5a3O3AnQq 00:33:43.967 15:38:53 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.I5a3O3AnQq 00:33:43.967 15:38:53 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:43.967 15:38:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:43.967 15:38:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:43.967 15:38:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:43.967 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:44.228 [2024-07-15 15:38:53.692894] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.I5a3O3AnQq': No such file or directory 00:33:44.228 [2024-07-15 15:38:53.692912] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:44.228 [2024-07-15 15:38:53.692933] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:44.228 [2024-07-15 15:38:53.692940] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:44.228 [2024-07-15 15:38:53.692946] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:44.228 request: 00:33:44.228 { 00:33:44.228 "name": "nvme0", 00:33:44.228 "trtype": "tcp", 00:33:44.228 "traddr": "127.0.0.1", 00:33:44.228 "adrfam": "ipv4", 00:33:44.228 "trsvcid": "4420", 00:33:44.228 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:33:44.228 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:44.228 "prchk_reftag": false, 00:33:44.228 "prchk_guard": false, 00:33:44.228 "hdgst": false, 00:33:44.228 "ddgst": false, 00:33:44.228 "psk": "key0", 00:33:44.228 "method": "bdev_nvme_attach_controller", 00:33:44.228 "req_id": 1 00:33:44.228 } 00:33:44.228 Got JSON-RPC error response 00:33:44.228 response: 00:33:44.228 { 00:33:44.228 "code": -19, 00:33:44.228 "message": "No such device" 00:33:44.228 } 00:33:44.228 15:38:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:44.228 15:38:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:44.228 15:38:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:44.228 15:38:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:44.228 15:38:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:44.228 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:44.489 15:38:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:44.489 15:38:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kpWjEDTHX4 00:33:44.489 15:38:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:44.489 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:44.749 nvme0n1 00:33:44.749 15:38:54 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:33:44.749 15:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:44.749 15:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:44.749 15:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.749 15:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:44.749 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.009 15:38:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:45.009 15:38:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:45.009 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:45.270 15:38:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:45.270 15:38:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.270 15:38:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:45.270 15:38:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:45.270 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.530 15:38:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:45.530 15:38:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:45.530 15:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:45.530 15:38:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:45.530 15:38:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:45.530 15:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:45.791 15:38:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:45.791 15:38:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kpWjEDTHX4 00:33:45.791 15:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kpWjEDTHX4 00:33:46.051 15:38:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lN9G41TiDO 00:33:46.051 15:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lN9G41TiDO 00:33:46.051 15:38:55 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.051 15:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:46.310 nvme0n1 00:33:46.310 15:38:55 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:46.310 15:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:46.570 15:38:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:46.570 "subsystems": [ 00:33:46.570 { 00:33:46.570 "subsystem": "keyring", 00:33:46.570 "config": [ 00:33:46.570 { 00:33:46.570 "method": "keyring_file_add_key", 00:33:46.570 "params": { 00:33:46.570 "name": "key0", 00:33:46.570 "path": "/tmp/tmp.kpWjEDTHX4" 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "keyring_file_add_key", 00:33:46.570 "params": { 00:33:46.570 "name": "key1", 00:33:46.570 "path": "/tmp/tmp.lN9G41TiDO" 00:33:46.570 } 00:33:46.570 } 00:33:46.570 ] 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "subsystem": "iobuf", 00:33:46.570 "config": [ 00:33:46.570 { 00:33:46.570 "method": "iobuf_set_options", 00:33:46.570 "params": { 00:33:46.570 "small_pool_count": 8192, 00:33:46.570 "large_pool_count": 1024, 00:33:46.570 "small_bufsize": 8192, 00:33:46.570 "large_bufsize": 135168 00:33:46.570 } 00:33:46.570 } 00:33:46.570 ] 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "subsystem": "sock", 00:33:46.570 "config": [ 00:33:46.570 { 00:33:46.570 "method": "sock_set_default_impl", 00:33:46.570 "params": { 00:33:46.570 "impl_name": "posix" 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "sock_impl_set_options", 00:33:46.570 "params": { 00:33:46.570 "impl_name": "ssl", 00:33:46.570 "recv_buf_size": 4096, 00:33:46.570 "send_buf_size": 4096, 00:33:46.570 "enable_recv_pipe": true, 00:33:46.570 "enable_quickack": false, 00:33:46.570 "enable_placement_id": 0, 00:33:46.570 "enable_zerocopy_send_server": true, 00:33:46.570 "enable_zerocopy_send_client": false, 00:33:46.570 "zerocopy_threshold": 0, 00:33:46.570 "tls_version": 0, 00:33:46.570 "enable_ktls": false 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "sock_impl_set_options", 00:33:46.570 "params": { 00:33:46.570 "impl_name": "posix", 00:33:46.570 "recv_buf_size": 2097152, 00:33:46.570 "send_buf_size": 2097152, 00:33:46.570 "enable_recv_pipe": true, 00:33:46.570 "enable_quickack": false, 00:33:46.570 "enable_placement_id": 0, 00:33:46.570 "enable_zerocopy_send_server": true, 00:33:46.570 "enable_zerocopy_send_client": false, 00:33:46.570 "zerocopy_threshold": 0, 00:33:46.570 "tls_version": 0, 00:33:46.570 "enable_ktls": false 00:33:46.570 } 00:33:46.570 } 00:33:46.570 ] 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "subsystem": "vmd", 00:33:46.570 "config": [] 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "subsystem": "accel", 00:33:46.570 "config": [ 00:33:46.570 { 00:33:46.570 "method": "accel_set_options", 00:33:46.570 "params": { 00:33:46.570 "small_cache_size": 128, 00:33:46.570 "large_cache_size": 16, 00:33:46.570 "task_count": 2048, 00:33:46.570 "sequence_count": 2048, 00:33:46.570 "buf_count": 2048 00:33:46.570 } 00:33:46.570 } 00:33:46.570 ] 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 
"subsystem": "bdev", 00:33:46.570 "config": [ 00:33:46.570 { 00:33:46.570 "method": "bdev_set_options", 00:33:46.570 "params": { 00:33:46.570 "bdev_io_pool_size": 65535, 00:33:46.570 "bdev_io_cache_size": 256, 00:33:46.570 "bdev_auto_examine": true, 00:33:46.570 "iobuf_small_cache_size": 128, 00:33:46.570 "iobuf_large_cache_size": 16 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "bdev_raid_set_options", 00:33:46.570 "params": { 00:33:46.570 "process_window_size_kb": 1024 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "bdev_iscsi_set_options", 00:33:46.570 "params": { 00:33:46.570 "timeout_sec": 30 00:33:46.570 } 00:33:46.570 }, 00:33:46.570 { 00:33:46.570 "method": "bdev_nvme_set_options", 00:33:46.570 "params": { 00:33:46.570 "action_on_timeout": "none", 00:33:46.570 "timeout_us": 0, 00:33:46.570 "timeout_admin_us": 0, 00:33:46.570 "keep_alive_timeout_ms": 10000, 00:33:46.570 "arbitration_burst": 0, 00:33:46.570 "low_priority_weight": 0, 00:33:46.570 "medium_priority_weight": 0, 00:33:46.570 "high_priority_weight": 0, 00:33:46.570 "nvme_adminq_poll_period_us": 10000, 00:33:46.570 "nvme_ioq_poll_period_us": 0, 00:33:46.570 "io_queue_requests": 512, 00:33:46.570 "delay_cmd_submit": true, 00:33:46.570 "transport_retry_count": 4, 00:33:46.570 "bdev_retry_count": 3, 00:33:46.571 "transport_ack_timeout": 0, 00:33:46.571 "ctrlr_loss_timeout_sec": 0, 00:33:46.571 "reconnect_delay_sec": 0, 00:33:46.571 "fast_io_fail_timeout_sec": 0, 00:33:46.571 "disable_auto_failback": false, 00:33:46.571 "generate_uuids": false, 00:33:46.571 "transport_tos": 0, 00:33:46.571 "nvme_error_stat": false, 00:33:46.571 "rdma_srq_size": 0, 00:33:46.571 "io_path_stat": false, 00:33:46.571 "allow_accel_sequence": false, 00:33:46.571 "rdma_max_cq_size": 0, 00:33:46.571 "rdma_cm_event_timeout_ms": 0, 00:33:46.571 "dhchap_digests": [ 00:33:46.571 "sha256", 00:33:46.571 "sha384", 00:33:46.571 "sha512" 00:33:46.571 ], 00:33:46.571 "dhchap_dhgroups": [ 00:33:46.571 "null", 00:33:46.571 "ffdhe2048", 00:33:46.571 "ffdhe3072", 00:33:46.571 "ffdhe4096", 00:33:46.571 "ffdhe6144", 00:33:46.571 "ffdhe8192" 00:33:46.571 ] 00:33:46.571 } 00:33:46.571 }, 00:33:46.571 { 00:33:46.571 "method": "bdev_nvme_attach_controller", 00:33:46.571 "params": { 00:33:46.571 "name": "nvme0", 00:33:46.571 "trtype": "TCP", 00:33:46.571 "adrfam": "IPv4", 00:33:46.571 "traddr": "127.0.0.1", 00:33:46.571 "trsvcid": "4420", 00:33:46.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.571 "prchk_reftag": false, 00:33:46.571 "prchk_guard": false, 00:33:46.571 "ctrlr_loss_timeout_sec": 0, 00:33:46.571 "reconnect_delay_sec": 0, 00:33:46.571 "fast_io_fail_timeout_sec": 0, 00:33:46.571 "psk": "key0", 00:33:46.571 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.571 "hdgst": false, 00:33:46.571 "ddgst": false 00:33:46.571 } 00:33:46.571 }, 00:33:46.571 { 00:33:46.571 "method": "bdev_nvme_set_hotplug", 00:33:46.571 "params": { 00:33:46.571 "period_us": 100000, 00:33:46.571 "enable": false 00:33:46.571 } 00:33:46.571 }, 00:33:46.571 { 00:33:46.571 "method": "bdev_wait_for_examine" 00:33:46.571 } 00:33:46.571 ] 00:33:46.571 }, 00:33:46.571 { 00:33:46.571 "subsystem": "nbd", 00:33:46.571 "config": [] 00:33:46.571 } 00:33:46.571 ] 00:33:46.571 }' 00:33:46.571 15:38:56 keyring_file -- keyring/file.sh@114 -- # killprocess 948756 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 948756 ']' 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@952 -- # kill -0 948756 00:33:46.571 15:38:56 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948756 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948756' 00:33:46.571 killing process with pid 948756 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@967 -- # kill 948756 00:33:46.571 Received shutdown signal, test time was about 1.000000 seconds 00:33:46.571 00:33:46.571 Latency(us) 00:33:46.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.571 =================================================================================================================== 00:33:46.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.571 15:38:56 keyring_file -- common/autotest_common.sh@972 -- # wait 948756 00:33:46.858 15:38:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=950297 00:33:46.858 15:38:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 950297 /var/tmp/bperf.sock 00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 950297 ']' 00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.858 15:38:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
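The relaunch above feeds the configuration saved a moment earlier straight into the new bdevperf process: the -c /dev/fd/63 argument is most likely bash process substitution wrapped around the echoed JSON, so the keyring entries and the nvme0 controller are restored before any RPC is issued over /var/tmp/bperf.sock. A rough sketch of that pattern, assuming $config holds the save_config output captured above and using the same binary and script paths as this job:

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# hand the saved JSON to the new instance on a /dev/fd/NN descriptor and background it
"$bperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!
# once the socket is listening, the pre-loaded keys and controller are already visible
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_get_controllers

The checks that follow in the log (jq length == 2 on keyring_get_keys, refcnt probes on key0/key1, and bdev_nvme_get_controllers returning nvme0) confirm the restored state before cleanup.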
00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.858 15:38:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:46.858 15:38:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:46.858 "subsystems": [ 00:33:46.858 { 00:33:46.858 "subsystem": "keyring", 00:33:46.858 "config": [ 00:33:46.858 { 00:33:46.858 "method": "keyring_file_add_key", 00:33:46.858 "params": { 00:33:46.858 "name": "key0", 00:33:46.858 "path": "/tmp/tmp.kpWjEDTHX4" 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "keyring_file_add_key", 00:33:46.858 "params": { 00:33:46.858 "name": "key1", 00:33:46.858 "path": "/tmp/tmp.lN9G41TiDO" 00:33:46.858 } 00:33:46.858 } 00:33:46.858 ] 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "subsystem": "iobuf", 00:33:46.858 "config": [ 00:33:46.858 { 00:33:46.858 "method": "iobuf_set_options", 00:33:46.858 "params": { 00:33:46.858 "small_pool_count": 8192, 00:33:46.858 "large_pool_count": 1024, 00:33:46.858 "small_bufsize": 8192, 00:33:46.858 "large_bufsize": 135168 00:33:46.858 } 00:33:46.858 } 00:33:46.858 ] 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "subsystem": "sock", 00:33:46.858 "config": [ 00:33:46.858 { 00:33:46.858 "method": "sock_set_default_impl", 00:33:46.858 "params": { 00:33:46.858 "impl_name": "posix" 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "sock_impl_set_options", 00:33:46.858 "params": { 00:33:46.858 "impl_name": "ssl", 00:33:46.858 "recv_buf_size": 4096, 00:33:46.858 "send_buf_size": 4096, 00:33:46.858 "enable_recv_pipe": true, 00:33:46.858 "enable_quickack": false, 00:33:46.858 "enable_placement_id": 0, 00:33:46.858 "enable_zerocopy_send_server": true, 00:33:46.858 "enable_zerocopy_send_client": false, 00:33:46.858 "zerocopy_threshold": 0, 00:33:46.858 "tls_version": 0, 00:33:46.858 "enable_ktls": false 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "sock_impl_set_options", 00:33:46.858 "params": { 00:33:46.858 "impl_name": "posix", 00:33:46.858 "recv_buf_size": 2097152, 00:33:46.858 "send_buf_size": 2097152, 00:33:46.858 "enable_recv_pipe": true, 00:33:46.858 "enable_quickack": false, 00:33:46.858 "enable_placement_id": 0, 00:33:46.858 "enable_zerocopy_send_server": true, 00:33:46.858 "enable_zerocopy_send_client": false, 00:33:46.858 "zerocopy_threshold": 0, 00:33:46.858 "tls_version": 0, 00:33:46.858 "enable_ktls": false 00:33:46.858 } 00:33:46.858 } 00:33:46.858 ] 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "subsystem": "vmd", 00:33:46.858 "config": [] 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "subsystem": "accel", 00:33:46.858 "config": [ 00:33:46.858 { 00:33:46.858 "method": "accel_set_options", 00:33:46.858 "params": { 00:33:46.858 "small_cache_size": 128, 00:33:46.858 "large_cache_size": 16, 00:33:46.858 "task_count": 2048, 00:33:46.858 "sequence_count": 2048, 00:33:46.858 "buf_count": 2048 00:33:46.858 } 00:33:46.858 } 00:33:46.858 ] 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "subsystem": "bdev", 00:33:46.858 "config": [ 00:33:46.858 { 00:33:46.858 "method": "bdev_set_options", 00:33:46.858 "params": { 00:33:46.858 "bdev_io_pool_size": 65535, 00:33:46.858 "bdev_io_cache_size": 256, 00:33:46.858 "bdev_auto_examine": true, 00:33:46.858 "iobuf_small_cache_size": 128, 00:33:46.858 "iobuf_large_cache_size": 16 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "bdev_raid_set_options", 00:33:46.858 "params": { 00:33:46.858 "process_window_size_kb": 1024 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 
"method": "bdev_iscsi_set_options", 00:33:46.858 "params": { 00:33:46.858 "timeout_sec": 30 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "bdev_nvme_set_options", 00:33:46.858 "params": { 00:33:46.858 "action_on_timeout": "none", 00:33:46.858 "timeout_us": 0, 00:33:46.858 "timeout_admin_us": 0, 00:33:46.858 "keep_alive_timeout_ms": 10000, 00:33:46.858 "arbitration_burst": 0, 00:33:46.858 "low_priority_weight": 0, 00:33:46.858 "medium_priority_weight": 0, 00:33:46.858 "high_priority_weight": 0, 00:33:46.858 "nvme_adminq_poll_period_us": 10000, 00:33:46.858 "nvme_ioq_poll_period_us": 0, 00:33:46.858 "io_queue_requests": 512, 00:33:46.858 "delay_cmd_submit": true, 00:33:46.858 "transport_retry_count": 4, 00:33:46.858 "bdev_retry_count": 3, 00:33:46.858 "transport_ack_timeout": 0, 00:33:46.858 "ctrlr_loss_timeout_sec": 0, 00:33:46.858 "reconnect_delay_sec": 0, 00:33:46.858 "fast_io_fail_timeout_sec": 0, 00:33:46.858 "disable_auto_failback": false, 00:33:46.858 "generate_uuids": false, 00:33:46.858 "transport_tos": 0, 00:33:46.858 "nvme_error_stat": false, 00:33:46.858 "rdma_srq_size": 0, 00:33:46.858 "io_path_stat": false, 00:33:46.858 "allow_accel_sequence": false, 00:33:46.858 "rdma_max_cq_size": 0, 00:33:46.858 "rdma_cm_event_timeout_ms": 0, 00:33:46.858 "dhchap_digests": [ 00:33:46.858 "sha256", 00:33:46.858 "sha384", 00:33:46.858 "sha512" 00:33:46.858 ], 00:33:46.858 "dhchap_dhgroups": [ 00:33:46.858 "null", 00:33:46.858 "ffdhe2048", 00:33:46.858 "ffdhe3072", 00:33:46.858 "ffdhe4096", 00:33:46.858 "ffdhe6144", 00:33:46.858 "ffdhe8192" 00:33:46.858 ] 00:33:46.858 } 00:33:46.858 }, 00:33:46.858 { 00:33:46.858 "method": "bdev_nvme_attach_controller", 00:33:46.858 "params": { 00:33:46.858 "name": "nvme0", 00:33:46.858 "trtype": "TCP", 00:33:46.858 "adrfam": "IPv4", 00:33:46.858 "traddr": "127.0.0.1", 00:33:46.858 "trsvcid": "4420", 00:33:46.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.858 "prchk_reftag": false, 00:33:46.858 "prchk_guard": false, 00:33:46.858 "ctrlr_loss_timeout_sec": 0, 00:33:46.858 "reconnect_delay_sec": 0, 00:33:46.858 "fast_io_fail_timeout_sec": 0, 00:33:46.858 "psk": "key0", 00:33:46.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.858 "hdgst": false, 00:33:46.858 "ddgst": false 00:33:46.859 } 00:33:46.859 }, 00:33:46.859 { 00:33:46.859 "method": "bdev_nvme_set_hotplug", 00:33:46.859 "params": { 00:33:46.859 "period_us": 100000, 00:33:46.859 "enable": false 00:33:46.859 } 00:33:46.859 }, 00:33:46.859 { 00:33:46.859 "method": "bdev_wait_for_examine" 00:33:46.859 } 00:33:46.859 ] 00:33:46.859 }, 00:33:46.859 { 00:33:46.859 "subsystem": "nbd", 00:33:46.859 "config": [] 00:33:46.859 } 00:33:46.859 ] 00:33:46.859 }' 00:33:46.859 [2024-07-15 15:38:56.277129] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:33:46.859 [2024-07-15 15:38:56.277181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950297 ] 00:33:46.859 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.859 [2024-07-15 15:38:56.340116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.859 [2024-07-15 15:38:56.404126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.118 [2024-07-15 15:38:56.550787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:47.688 15:38:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.688 15:38:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:47.688 15:38:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:47.688 15:38:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.688 15:38:57 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:47.688 15:38:57 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.688 15:38:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:47.948 15:38:57 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:47.948 15:38:57 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:47.948 15:38:57 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:47.948 15:38:57 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:47.948 15:38:57 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:47.948 15:38:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:48.209 15:38:57 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:48.209 15:38:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:48.209 15:38:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kpWjEDTHX4 /tmp/tmp.lN9G41TiDO 00:33:48.209 15:38:57 keyring_file -- keyring/file.sh@20 -- # killprocess 950297 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 950297 ']' 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 950297 00:33:48.209 15:38:57 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950297 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950297' 00:33:48.209 killing process with pid 950297 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@967 -- # kill 950297 00:33:48.209 Received shutdown signal, test time was about 1.000000 seconds 00:33:48.209 00:33:48.209 Latency(us) 00:33:48.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:48.209 =================================================================================================================== 00:33:48.209 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:48.209 15:38:57 keyring_file -- common/autotest_common.sh@972 -- # wait 950297 00:33:48.470 15:38:57 keyring_file -- keyring/file.sh@21 -- # killprocess 948512 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 948512 ']' 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@952 -- # kill -0 948512 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 948512 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 948512' 00:33:48.470 killing process with pid 948512 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@967 -- # kill 948512 00:33:48.470 [2024-07-15 15:38:57.913058] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:48.470 15:38:57 keyring_file -- common/autotest_common.sh@972 -- # wait 948512 00:33:48.731 00:33:48.731 real 0m11.074s 00:33:48.731 user 0m26.430s 00:33:48.731 sys 0m2.572s 00:33:48.731 15:38:58 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:48.731 15:38:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 ************************************ 00:33:48.731 END TEST keyring_file 00:33:48.731 ************************************ 00:33:48.731 15:38:58 -- common/autotest_common.sh@1142 -- # return 0 00:33:48.731 15:38:58 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:48.731 15:38:58 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:48.731 15:38:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:48.731 15:38:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:48.731 15:38:58 -- common/autotest_common.sh@10 -- # set +x 00:33:48.731 ************************************ 00:33:48.731 START TEST keyring_linux 00:33:48.731 ************************************ 00:33:48.731 15:38:58 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:48.731 * Looking for test storage... 00:33:48.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=008c5ac1-5feb-ec11-9bc7-a4bf019282a6 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.731 15:38:58 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.731 15:38:58 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.731 15:38:58 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.731 15:38:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.731 15:38:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.731 15:38:58 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.731 15:38:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:48.731 15:38:58 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:48.731 15:38:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:48.731 15:38:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:48.731 15:38:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:48.992 15:38:58 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:48.992 /tmp/:spdk-test:key0 00:33:48.992 15:38:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:48.992 15:38:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:48.992 15:38:58 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:48.992 15:38:58 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:48.993 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:48.993 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:48.993 15:38:58 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:48.993 15:38:58 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:48.993 15:38:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:48.993 15:38:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:48.993 /tmp/:spdk-test:key1 00:33:48.993 15:38:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=950903 00:33:48.993 15:38:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 950903 00:33:48.993 15:38:58 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 950903 ']' 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.993 15:38:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:48.993 [2024-07-15 15:38:58.487296] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:33:48.993 [2024-07-15 15:38:58.487371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950903 ] 00:33:48.993 EAL: No free 2048 kB hugepages reported on node 1 00:33:48.993 [2024-07-15 15:38:58.554700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.253 [2024-07-15 15:38:58.629812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:49.826 [2024-07-15 15:38:59.247544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.826 null0 00:33:49.826 [2024-07-15 15:38:59.279584] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:49.826 [2024-07-15 15:38:59.279958] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:49.826 619327174 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:49.826 270569864 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=951004 00:33:49.826 15:38:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 951004 /var/tmp/bperf.sock 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 951004 ']' 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:49.826 15:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:49.826 [2024-07-15 15:38:59.337248] Starting SPDK v24.09-pre git sha1 248c547d0 / DPDK 24.03.0 initialization... 
00:33:49.826 [2024-07-15 15:38:59.337294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951004 ] 00:33:49.826 EAL: No free 2048 kB hugepages reported on node 1 00:33:49.826 [2024-07-15 15:38:59.398423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.086 [2024-07-15 15:38:59.462382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.654 15:39:00 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.654 15:39:00 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:50.654 15:39:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:50.654 15:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:50.654 15:39:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:50.654 15:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:50.914 15:39:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:50.914 15:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:51.173 [2024-07-15 15:39:00.605903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:51.173 nvme0n1 00:33:51.173 15:39:00 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:51.173 15:39:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:51.173 15:39:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:51.173 15:39:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:51.173 15:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:51.173 15:39:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:51.433 15:39:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:51.433 15:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:51.433 15:39:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@25 -- # sn=619327174 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:51.433 15:39:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:51.433 15:39:01 keyring_linux -- keyring/linux.sh@26 -- # [[ 619327174 == \6\1\9\3\2\7\1\7\4 ]] 00:33:51.433 15:39:01 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 619327174 00:33:51.433 15:39:01 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:51.433 15:39:01 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:51.693 Running I/O for 1 seconds... 00:33:52.632 00:33:52.632 Latency(us) 00:33:52.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.632 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:52.632 nvme0n1 : 1.01 14330.28 55.98 0.00 0.00 8887.04 6908.59 16930.13 00:33:52.632 =================================================================================================================== 00:33:52.632 Total : 14330.28 55.98 0.00 0.00 8887.04 6908.59 16930.13 00:33:52.632 0 00:33:52.632 15:39:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:52.632 15:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:52.891 15:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:52.891 15:39:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:52.892 15:39:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:52.892 15:39:02 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:52.892 15:39:02 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:53.151 [2024-07-15 15:39:02.580115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:53.151 [2024-07-15 15:39:02.580608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125a590 (107): Transport endpoint is not connected 00:33:53.151 [2024-07-15 15:39:02.581603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125a590 (9): Bad file descriptor 00:33:53.151 [2024-07-15 15:39:02.582605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:53.151 [2024-07-15 15:39:02.582613] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:53.151 [2024-07-15 15:39:02.582621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:53.151 request: 00:33:53.151 { 00:33:53.151 "name": "nvme0", 00:33:53.151 "trtype": "tcp", 00:33:53.151 "traddr": "127.0.0.1", 00:33:53.151 "adrfam": "ipv4", 00:33:53.151 "trsvcid": "4420", 00:33:53.151 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.151 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.151 "prchk_reftag": false, 00:33:53.151 "prchk_guard": false, 00:33:53.151 "hdgst": false, 00:33:53.151 "ddgst": false, 00:33:53.151 "psk": ":spdk-test:key1", 00:33:53.151 "method": "bdev_nvme_attach_controller", 00:33:53.151 "req_id": 1 00:33:53.151 } 00:33:53.151 Got JSON-RPC error response 00:33:53.151 response: 00:33:53.151 { 00:33:53.151 "code": -5, 00:33:53.151 "message": "Input/output error" 00:33:53.151 } 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@33 -- # sn=619327174 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 619327174 00:33:53.151 1 links removed 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@33 -- # sn=270569864 00:33:53.151 
15:39:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 270569864 00:33:53.151 1 links removed 00:33:53.151 15:39:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 951004 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 951004 ']' 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 951004 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 951004 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 951004' 00:33:53.151 killing process with pid 951004 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@967 -- # kill 951004 00:33:53.151 Received shutdown signal, test time was about 1.000000 seconds 00:33:53.151 00:33:53.151 Latency(us) 00:33:53.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.151 =================================================================================================================== 00:33:53.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.151 15:39:02 keyring_linux -- common/autotest_common.sh@972 -- # wait 951004 00:33:53.410 15:39:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 950903 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 950903 ']' 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 950903 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950903 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950903' 00:33:53.410 killing process with pid 950903 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@967 -- # kill 950903 00:33:53.410 15:39:02 keyring_linux -- common/autotest_common.sh@972 -- # wait 950903 00:33:53.670 00:33:53.670 real 0m4.870s 00:33:53.670 user 0m8.813s 00:33:53.670 sys 0m1.356s 00:33:53.670 15:39:03 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:53.670 15:39:03 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:53.670 ************************************ 00:33:53.670 END TEST keyring_linux 00:33:53.670 ************************************ 00:33:53.670 15:39:03 -- common/autotest_common.sh@1142 -- # return 0 00:33:53.670 15:39:03 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:53.670 15:39:03 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:53.670 15:39:03 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:53.670 15:39:03 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:53.670 15:39:03 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:53.670 15:39:03 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:53.670 15:39:03 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:53.670 15:39:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:53.670 15:39:03 -- common/autotest_common.sh@10 -- # set +x 00:33:53.670 15:39:03 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:53.670 15:39:03 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:53.670 15:39:03 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:53.670 15:39:03 -- common/autotest_common.sh@10 -- # set +x 00:34:01.805 INFO: APP EXITING 00:34:01.805 INFO: killing all VMs 00:34:01.805 INFO: killing vhost app 00:34:01.805 INFO: EXIT DONE 00:34:04.350 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:04.350 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:04.610 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:04.610 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:08.804 Cleaning 00:34:08.804 Removing: /var/run/dpdk/spdk0/config 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:08.804 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:08.804 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:08.804 Removing: /var/run/dpdk/spdk1/config 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:08.804 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:08.804 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:08.804 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:08.804 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:08.804 Removing: /var/run/dpdk/spdk2/config 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:08.804 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:08.804 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:08.804 Removing: /var/run/dpdk/spdk3/config 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:08.804 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:08.805 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:08.805 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:08.805 Removing: /var/run/dpdk/spdk4/config 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:08.805 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:08.805 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:08.805 Removing: /dev/shm/bdev_svc_trace.1 00:34:08.805 Removing: /dev/shm/nvmf_trace.0 00:34:08.805 Removing: /dev/shm/spdk_tgt_trace.pid476889 00:34:08.805 Removing: /var/run/dpdk/spdk0 00:34:08.805 Removing: /var/run/dpdk/spdk1 00:34:08.805 Removing: /var/run/dpdk/spdk2 00:34:08.805 Removing: /var/run/dpdk/spdk3 00:34:08.805 Removing: /var/run/dpdk/spdk4 00:34:08.805 Removing: /var/run/dpdk/spdk_pid475269 00:34:08.805 Removing: /var/run/dpdk/spdk_pid476889 00:34:08.805 Removing: /var/run/dpdk/spdk_pid477415 00:34:08.805 Removing: /var/run/dpdk/spdk_pid478573 00:34:08.805 Removing: /var/run/dpdk/spdk_pid478791 00:34:08.805 Removing: /var/run/dpdk/spdk_pid480066 00:34:08.805 Removing: /var/run/dpdk/spdk_pid480192 00:34:08.805 Removing: /var/run/dpdk/spdk_pid480574 00:34:08.805 Removing: /var/run/dpdk/spdk_pid481442 00:34:08.805 Removing: /var/run/dpdk/spdk_pid482212 00:34:08.805 Removing: /var/run/dpdk/spdk_pid482597 00:34:08.805 Removing: /var/run/dpdk/spdk_pid482884 
00:34:08.805 Removing: /var/run/dpdk/spdk_pid483179 00:34:08.805 Removing: /var/run/dpdk/spdk_pid483468 00:34:08.805 Removing: /var/run/dpdk/spdk_pid483820 00:34:08.805 Removing: /var/run/dpdk/spdk_pid484178 00:34:08.805 Removing: /var/run/dpdk/spdk_pid484511 00:34:08.805 Removing: /var/run/dpdk/spdk_pid485599 00:34:08.805 Removing: /var/run/dpdk/spdk_pid489430 00:34:08.805 Removing: /var/run/dpdk/spdk_pid489796 00:34:08.805 Removing: /var/run/dpdk/spdk_pid490093 00:34:08.805 Removing: /var/run/dpdk/spdk_pid490173 00:34:08.805 Removing: /var/run/dpdk/spdk_pid490591 00:34:08.805 Removing: /var/run/dpdk/spdk_pid490879 00:34:08.805 Removing: /var/run/dpdk/spdk_pid491259 00:34:08.805 Removing: /var/run/dpdk/spdk_pid491572 00:34:08.805 Removing: /var/run/dpdk/spdk_pid491816 00:34:08.805 Removing: /var/run/dpdk/spdk_pid491968 00:34:08.805 Removing: /var/run/dpdk/spdk_pid492278 00:34:08.805 Removing: /var/run/dpdk/spdk_pid492344 00:34:08.805 Removing: /var/run/dpdk/spdk_pid492783 00:34:08.805 Removing: /var/run/dpdk/spdk_pid493133 00:34:08.805 Removing: /var/run/dpdk/spdk_pid493521 00:34:08.805 Removing: /var/run/dpdk/spdk_pid493807 00:34:08.805 Removing: /var/run/dpdk/spdk_pid493917 00:34:08.805 Removing: /var/run/dpdk/spdk_pid493979 00:34:08.805 Removing: /var/run/dpdk/spdk_pid494334 00:34:08.805 Removing: /var/run/dpdk/spdk_pid494697 00:34:08.805 Removing: /var/run/dpdk/spdk_pid494965 00:34:08.805 Removing: /var/run/dpdk/spdk_pid495146 00:34:08.805 Removing: /var/run/dpdk/spdk_pid495436 00:34:08.805 Removing: /var/run/dpdk/spdk_pid495785 00:34:08.805 Removing: /var/run/dpdk/spdk_pid496138 00:34:08.805 Removing: /var/run/dpdk/spdk_pid496474 00:34:08.805 Removing: /var/run/dpdk/spdk_pid496661 00:34:08.805 Removing: /var/run/dpdk/spdk_pid496879 00:34:08.805 Removing: /var/run/dpdk/spdk_pid497226 00:34:08.805 Removing: /var/run/dpdk/spdk_pid497576 00:34:08.805 Removing: /var/run/dpdk/spdk_pid497916 00:34:08.805 Removing: /var/run/dpdk/spdk_pid498110 00:34:08.805 Removing: /var/run/dpdk/spdk_pid498320 00:34:08.805 Removing: /var/run/dpdk/spdk_pid498672 00:34:08.805 Removing: /var/run/dpdk/spdk_pid499024 00:34:08.805 Removing: /var/run/dpdk/spdk_pid499374 00:34:08.805 Removing: /var/run/dpdk/spdk_pid499608 00:34:08.805 Removing: /var/run/dpdk/spdk_pid499812 00:34:08.805 Removing: /var/run/dpdk/spdk_pid500147 00:34:08.805 Removing: /var/run/dpdk/spdk_pid500462 00:34:08.805 Removing: /var/run/dpdk/spdk_pid505118 00:34:08.805 Removing: /var/run/dpdk/spdk_pid561590 00:34:08.805 Removing: /var/run/dpdk/spdk_pid567054 00:34:08.805 Removing: /var/run/dpdk/spdk_pid579196 00:34:08.805 Removing: /var/run/dpdk/spdk_pid585915 00:34:08.805 Removing: /var/run/dpdk/spdk_pid591160 00:34:08.805 Removing: /var/run/dpdk/spdk_pid591842 00:34:08.805 Removing: /var/run/dpdk/spdk_pid599816 00:34:08.805 Removing: /var/run/dpdk/spdk_pid607300 00:34:08.805 Removing: /var/run/dpdk/spdk_pid607386 00:34:08.805 Removing: /var/run/dpdk/spdk_pid608419 00:34:08.805 Removing: /var/run/dpdk/spdk_pid609457 00:34:08.805 Removing: /var/run/dpdk/spdk_pid610538 00:34:08.805 Removing: /var/run/dpdk/spdk_pid611172 00:34:08.805 Removing: /var/run/dpdk/spdk_pid611260 00:34:08.805 Removing: /var/run/dpdk/spdk_pid611539 00:34:08.805 Removing: /var/run/dpdk/spdk_pid611607 00:34:08.805 Removing: /var/run/dpdk/spdk_pid611610 00:34:08.805 Removing: /var/run/dpdk/spdk_pid612615 00:34:08.805 Removing: /var/run/dpdk/spdk_pid613619 00:34:08.805 Removing: /var/run/dpdk/spdk_pid614673 00:34:08.805 Removing: /var/run/dpdk/spdk_pid615309 00:34:08.805 
Removing: /var/run/dpdk/spdk_pid615436 00:34:08.805 Removing: /var/run/dpdk/spdk_pid615685 00:34:08.805 Removing: /var/run/dpdk/spdk_pid617072 00:34:08.805 Removing: /var/run/dpdk/spdk_pid618458 00:34:08.805 Removing: /var/run/dpdk/spdk_pid628694 00:34:08.805 Removing: /var/run/dpdk/spdk_pid629051 00:34:08.805 Removing: /var/run/dpdk/spdk_pid634350 00:34:08.805 Removing: /var/run/dpdk/spdk_pid641597 00:34:08.805 Removing: /var/run/dpdk/spdk_pid645247 00:34:08.805 Removing: /var/run/dpdk/spdk_pid658160 00:34:08.805 Removing: /var/run/dpdk/spdk_pid669618 00:34:08.805 Removing: /var/run/dpdk/spdk_pid671632 00:34:08.805 Removing: /var/run/dpdk/spdk_pid672659 00:34:08.805 Removing: /var/run/dpdk/spdk_pid693949 00:34:08.805 Removing: /var/run/dpdk/spdk_pid698855 00:34:08.805 Removing: /var/run/dpdk/spdk_pid730667 00:34:08.805 Removing: /var/run/dpdk/spdk_pid736271 00:34:08.805 Removing: /var/run/dpdk/spdk_pid738124 00:34:08.805 Removing: /var/run/dpdk/spdk_pid740407 00:34:08.805 Removing: /var/run/dpdk/spdk_pid740614 00:34:08.805 Removing: /var/run/dpdk/spdk_pid740789 00:34:08.805 Removing: /var/run/dpdk/spdk_pid741190 00:34:08.805 Removing: /var/run/dpdk/spdk_pid741905 00:34:08.805 Removing: /var/run/dpdk/spdk_pid744399 00:34:08.805 Removing: /var/run/dpdk/spdk_pid745477 00:34:08.805 Removing: /var/run/dpdk/spdk_pid745957 00:34:08.805 Removing: /var/run/dpdk/spdk_pid748528 00:34:08.805 Removing: /var/run/dpdk/spdk_pid749279 00:34:08.805 Removing: /var/run/dpdk/spdk_pid749998 00:34:08.805 Removing: /var/run/dpdk/spdk_pid755278 00:34:08.805 Removing: /var/run/dpdk/spdk_pid767965 00:34:08.805 Removing: /var/run/dpdk/spdk_pid772774 00:34:08.805 Removing: /var/run/dpdk/spdk_pid780539 00:34:08.805 Removing: /var/run/dpdk/spdk_pid782030 00:34:08.805 Removing: /var/run/dpdk/spdk_pid783869 00:34:08.805 Removing: /var/run/dpdk/spdk_pid789176 00:34:08.805 Removing: /var/run/dpdk/spdk_pid794543 00:34:08.805 Removing: /var/run/dpdk/spdk_pid804626 00:34:08.805 Removing: /var/run/dpdk/spdk_pid804713 00:34:08.805 Removing: /var/run/dpdk/spdk_pid810016 00:34:08.805 Removing: /var/run/dpdk/spdk_pid810230 00:34:08.805 Removing: /var/run/dpdk/spdk_pid810464 00:34:09.064 Removing: /var/run/dpdk/spdk_pid811114 00:34:09.064 Removing: /var/run/dpdk/spdk_pid811119 00:34:09.064 Removing: /var/run/dpdk/spdk_pid817012 00:34:09.064 Removing: /var/run/dpdk/spdk_pid817547 00:34:09.064 Removing: /var/run/dpdk/spdk_pid823266 00:34:09.064 Removing: /var/run/dpdk/spdk_pid826447 00:34:09.064 Removing: /var/run/dpdk/spdk_pid833215 00:34:09.064 Removing: /var/run/dpdk/spdk_pid839991 00:34:09.064 Removing: /var/run/dpdk/spdk_pid850232 00:34:09.064 Removing: /var/run/dpdk/spdk_pid859634 00:34:09.064 Removing: /var/run/dpdk/spdk_pid859637 00:34:09.064 Removing: /var/run/dpdk/spdk_pid882511 00:34:09.064 Removing: /var/run/dpdk/spdk_pid883346 00:34:09.064 Removing: /var/run/dpdk/spdk_pid884159 00:34:09.064 Removing: /var/run/dpdk/spdk_pid884861 00:34:09.064 Removing: /var/run/dpdk/spdk_pid885921 00:34:09.064 Removing: /var/run/dpdk/spdk_pid886609 00:34:09.064 Removing: /var/run/dpdk/spdk_pid887286 00:34:09.064 Removing: /var/run/dpdk/spdk_pid887977 00:34:09.064 Removing: /var/run/dpdk/spdk_pid893419 00:34:09.064 Removing: /var/run/dpdk/spdk_pid893616 00:34:09.064 Removing: /var/run/dpdk/spdk_pid901145 00:34:09.064 Removing: /var/run/dpdk/spdk_pid901375 00:34:09.064 Removing: /var/run/dpdk/spdk_pid904243 00:34:09.064 Removing: /var/run/dpdk/spdk_pid912027 00:34:09.064 Removing: /var/run/dpdk/spdk_pid912091 00:34:09.064 Removing: 
/var/run/dpdk/spdk_pid918412 00:34:09.064 Removing: /var/run/dpdk/spdk_pid920616 00:34:09.064 Removing: /var/run/dpdk/spdk_pid922984 00:34:09.064 Removing: /var/run/dpdk/spdk_pid924312 00:34:09.064 Removing: /var/run/dpdk/spdk_pid926843 00:34:09.064 Removing: /var/run/dpdk/spdk_pid928113 00:34:09.064 Removing: /var/run/dpdk/spdk_pid938613 00:34:09.064 Removing: /var/run/dpdk/spdk_pid939123 00:34:09.064 Removing: /var/run/dpdk/spdk_pid939719 00:34:09.064 Removing: /var/run/dpdk/spdk_pid942705 00:34:09.064 Removing: /var/run/dpdk/spdk_pid943375 00:34:09.064 Removing: /var/run/dpdk/spdk_pid943880 00:34:09.064 Removing: /var/run/dpdk/spdk_pid948512 00:34:09.064 Removing: /var/run/dpdk/spdk_pid948756 00:34:09.064 Removing: /var/run/dpdk/spdk_pid950297 00:34:09.064 Removing: /var/run/dpdk/spdk_pid950903 00:34:09.064 Removing: /var/run/dpdk/spdk_pid951004 00:34:09.064 Clean 00:34:09.064 15:39:18 -- common/autotest_common.sh@1451 -- # return 0 00:34:09.064 15:39:18 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:34:09.064 15:39:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:09.064 15:39:18 -- common/autotest_common.sh@10 -- # set +x 00:34:09.324 15:39:18 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:34:09.324 15:39:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:09.324 15:39:18 -- common/autotest_common.sh@10 -- # set +x 00:34:09.324 15:39:18 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:09.324 15:39:18 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:34:09.324 15:39:18 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:34:09.324 15:39:18 -- spdk/autotest.sh@391 -- # hash lcov 00:34:09.324 15:39:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:34:09.324 15:39:18 -- spdk/autotest.sh@393 -- # hostname 00:34:09.324 15:39:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-13 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:34:09.324 geninfo: WARNING: invalid characters removed from testname! 
00:34:35.978 15:39:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:36.237 15:39:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:38.146 15:39:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:39.530 15:39:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:41.444 15:39:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:42.829 15:39:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:44.215 15:39:53 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:44.215 15:39:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:44.215 15:39:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:44.215 15:39:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:44.215 15:39:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:44.215 15:39:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.215 15:39:53 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.215 15:39:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.215 15:39:53 -- paths/export.sh@5 -- $ export PATH 00:34:44.215 15:39:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:44.215 15:39:53 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:44.215 15:39:53 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:44.215 15:39:53 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721050793.XXXXXX 00:34:44.215 15:39:53 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721050793.U3COcF 00:34:44.215 15:39:53 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:44.215 15:39:53 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:44.215 15:39:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:44.215 15:39:53 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:44.215 15:39:53 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:44.215 15:39:53 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:44.215 15:39:53 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:44.215 15:39:53 -- common/autotest_common.sh@10 -- $ set +x 00:34:44.476 15:39:53 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:44.476 15:39:53 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:44.476 15:39:53 -- pm/common@17 -- $ local monitor 00:34:44.476 15:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:44.476 15:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:44.476 15:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:44.476 15:39:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:44.476 15:39:53 -- pm/common@21 -- $ date +%s 00:34:44.476 15:39:53 -- pm/common@25 -- $ sleep 1 00:34:44.476 
15:39:53 -- pm/common@21 -- $ date +%s 00:34:44.476 15:39:53 -- pm/common@21 -- $ date +%s 00:34:44.476 15:39:53 -- pm/common@21 -- $ date +%s 00:34:44.476 15:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050793 00:34:44.476 15:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050793 00:34:44.476 15:39:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050793 00:34:44.476 15:39:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721050793 00:34:44.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050793_collect-vmstat.pm.log 00:34:44.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050793_collect-cpu-load.pm.log 00:34:44.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050793_collect-cpu-temp.pm.log 00:34:44.476 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721050793_collect-bmc-pm.bmc.pm.log 00:34:45.417 15:39:54 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:45.417 15:39:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:45.417 15:39:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:45.417 15:39:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:45.417 15:39:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:45.417 15:39:54 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:45.417 15:39:54 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:45.417 15:39:54 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:45.417 15:39:54 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:45.417 15:39:54 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:45.417 15:39:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:45.417 15:39:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:45.417 15:39:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:45.417 15:39:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:45.417 15:39:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:45.417 15:39:54 -- pm/common@44 -- $ pid=964145 00:34:45.417 15:39:54 -- pm/common@50 -- $ kill -TERM 964145 00:34:45.417 15:39:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:45.417 15:39:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:45.417 15:39:54 -- pm/common@44 -- $ pid=964146 00:34:45.417 15:39:54 -- pm/common@50 -- $ kill 
-TERM 964146 00:34:45.417 15:39:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:45.417 15:39:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:45.417 15:39:54 -- pm/common@44 -- $ pid=964148 00:34:45.417 15:39:54 -- pm/common@50 -- $ kill -TERM 964148 00:34:45.417 15:39:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:45.417 15:39:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:45.417 15:39:54 -- pm/common@44 -- $ pid=964171 00:34:45.417 15:39:54 -- pm/common@50 -- $ sudo -E kill -TERM 964171 00:34:45.417 + [[ -n 352912 ]] 00:34:45.417 + sudo kill 352912 00:34:45.428 [Pipeline] } 00:34:45.445 [Pipeline] // stage 00:34:45.451 [Pipeline] } 00:34:45.470 [Pipeline] // timeout 00:34:45.475 [Pipeline] } 00:34:45.494 [Pipeline] // catchError 00:34:45.500 [Pipeline] } 00:34:45.519 [Pipeline] // wrap 00:34:45.525 [Pipeline] } 00:34:45.541 [Pipeline] // catchError 00:34:45.550 [Pipeline] stage 00:34:45.552 [Pipeline] { (Epilogue) 00:34:45.568 [Pipeline] catchError 00:34:45.570 [Pipeline] { 00:34:45.585 [Pipeline] echo 00:34:45.586 Cleanup processes 00:34:45.592 [Pipeline] sh 00:34:45.884 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:45.884 964256 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:45.884 964693 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:45.899 [Pipeline] sh 00:34:46.189 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:46.189 ++ grep -v 'sudo pgrep' 00:34:46.189 ++ awk '{print $1}' 00:34:46.189 + sudo kill -9 964256 00:34:46.203 [Pipeline] sh 00:34:46.516 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:58.753 [Pipeline] sh 00:34:59.042 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:59.042 Artifacts sizes are good 00:34:59.058 [Pipeline] archiveArtifacts 00:34:59.066 Archiving artifacts 00:34:59.251 [Pipeline] sh 00:34:59.537 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:59.554 [Pipeline] cleanWs 00:34:59.565 [WS-CLEANUP] Deleting project workspace... 00:34:59.565 [WS-CLEANUP] Deferred wipeout is used... 00:34:59.572 [WS-CLEANUP] done 00:34:59.575 [Pipeline] } 00:34:59.597 [Pipeline] // catchError 00:34:59.611 [Pipeline] sh 00:34:59.901 + logger -p user.info -t JENKINS-CI 00:34:59.911 [Pipeline] } 00:34:59.929 [Pipeline] // stage 00:34:59.935 [Pipeline] } 00:34:59.954 [Pipeline] // node 00:34:59.960 [Pipeline] End of Pipeline 00:34:59.996 Finished: SUCCESS